#intuitionistic logic
pv1isalsoimportant · 6 months ago
Text
mathematical revelation so great i almost became religious
1K notes · View notes
4denthusiast · 3 months ago
Text
I've just noticed an odd combination of beliefs I hold about how maths works. I'm not exactly convinced that every statement in arithmetic (i.e. statements that can be written in terms of 0, the successor function, =, +, ×, the connectives "and" and "not", and quantification over natural numbers) is either true or false. (My previously more Platonist views were shaken by some university courses on topics like model theory and Gödel's incompleteness theorems. In order to understand that stuff you have to entertain the possibility that certain seemingly obvious things aren't true, and I then sort of never stopped for some of them.) At the same time, I accept the law of the excluded middle (that P∨¬P is always true). I generally wouldn't describe myself as an intuitionist, even if I am interested in the applications of some intuitionistic logics in computer science.
I think the way I resolve this apparent contradiction is that the reason I don't feel like all arithmetic statements are true or false is that I'm not sure the natural numbers, as a set, are uniquely defined. Any definition of what is and what isn't in ℕ tends to involve some degree of circularity. "It's 0, and S0, and SS0, and so on.", but "so on" for how many steps? A natural number of steps. Hopefully you see the issue. So then, an arithmetic statement may be true of one model of the naturals, but not another. Within any one model, P∨¬P is true (or, more to the point, (∀x. Px)∨¬(∀x. Px) is true), so if it's true in every model it's true, but we can't ever pin down quite which model we're talking about, so the individual statement (∀x. Px) can remain indeterminate.
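(For what it's worth, the standard way to make "another model" precise is the compactness theorem. A sketch, for anyone who hasn't seen it: add a fresh constant c to the language of arithmetic and consider

$$T \;=\; \mathrm{PA} \;\cup\; \{\, c > \underbrace{S\cdots S}_{n}0 \;\mid\; n = 0, 1, 2, \ldots \,\}.$$

Every finite subset of T is satisfied by ℕ itself, just by interpreting c as a large enough number, so by compactness T has a model: one satisfying all the Peano axioms but containing an element above every numeral. No first-order theory can rule these "extra" models out, which is exactly the sense in which "so on, for a natural number of steps" never pins ℕ down.)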
All this sort of implicitly relies on a separation of the language and meta-language, even though I didn't set out to have a separate meta-language in the first place. I'm not quite sure whether what I'm thinking here even makes sense. Perhaps what I mean is that the meta-language does have logical connectives (and, or, not), so you can form a claim like "(∀x. Px) is true, or ¬(∀x. Px) is true.", but it doesn't have quantification over the naturals, at least not always, because in the meta-language there isn't a unique ℕ, and you can't specify which one you mean because there's no way to totally pin it down. At least I think. But then the semantics of A∨B is meant to be that A∨B is true iff A is true or B is true. I guess we can still recover this by saying that any statement in the language that includes any quantifiers is implicitly with reference to a particular model of ℕ, and a statement is true iff it's true for all models, but then that requires that the meta-language can quantify over models of ℕ, which should be way less possible than quantifying over individual naturals. I don't know how to resolve this, if it even can be resolved. I'm kind of confused.
The true ℕ, if it exists, ought to be the smallest one of course. The trouble is you can't define "smallest" properly without discussing the whole class, which is a less basic concept than the numbers themselves. Also, not every ordered set or class has a smallest element. I think probably if you allow yourself sufficient expressiveness you can prove that in this case there is a smallest (take the intersection or something), but again I don't think you can prove that without making assumptions at least as strong as the conclusion.
The same thing happens with set theory, but there it all feels clearer. In contrast to the naturals where I'm not sure, I feel somewhat more confident that there isn't a single true set-theoretic universe V. There ought to be sets that can't be named (there are only countably many names after all), which makes the universe much trickier to pin down than the naturals. I know there are countable models of ZFC, but they don't feel like they're the real model, and ZFC is itself kind of vague. It leaves a lot of room for rather natural variation in what sets are allowed (e.g. the continuum hypothesis), while non-standard naturals seem much more exotic. If you assume that there's some particular ℕ that's the real ℕ, in the meta-language, this gives you much more solid foundation to use when talking about potential uncertainty in V. You can do induction, talk about constructible sets and stuff. It seems quite likely that the continuum hypothesis doesn't have a definite truth value, even though CH∨¬CH does, but it feels like quite an ordinary sort of indefiniteness, like "He has brown hair." when it isn't clear who "he" refers to. "Is there a cardinal between ω and 2^ω?" What version of the class of cardinals are you talking about?
37 notes · View notes
lilith-hazel-mathematics · 3 months ago
Text
For anyone who doesn't know (which is everyone), we also write stuff on Math Stack Exchange sometimes, asking and answering questions. If you liked our post about the mathematics of unknowability, you might like some of the answers we post there. We tend to post a lot of stuff about logic, computability, set theory, and occasionally analysis/topology. Below is a list of some MSE answers we're proud of.
About computability and arithmetic
We categorized Busy Beaver on the arithmetical hierarchy
Sigma summation and Pi product notation are expressive enough to define most computer programs
Primitive Recursion can perform most instances of finitary wellfounded recursion
Functions with polynomial-time decidable graphs can dominate any computable function
An explicit demonstration of Gödel's Beta lemma shows how Peano Arithmetic can perform recursive definitions.
About logic and set theory
Showing how Hilbert's Epsilon doesn't fit within Intuitionist Logic
Finitary set theory can still define an infinite wellorder, without Specification
The proof power of finitary set theory is strictly weaker without the axiom of Specification
The proof power of ZFC is strictly weaker without the axiom of Specification
A classic proof establishing the existence of order isomorphisms between wellorders
A limitative result on ZF without Choice gives exact conditions on when ZF is unable to establish an injection between exponents
A more complicated improvement of the previous result
About analysis, geometry, and topology
A functional definition of sine and cosine that doesn't appeal to arclength or calculus
The sequence sin(n)^n densely fills the interval [-1,1]
The sequence (sin(n)^n)/n is summable, and a generalization
Sets of concentric loops are topologically homeomorphic to concentric circles
Showing how non-Euclidean geometries can construct models of Euclidean geometry
18 notes · View notes
mostly-magical-polls · 3 months ago
Text
Fucking Whatever Tournament - Round 3
Homotopy Type Theory: Recent development in proof theory, a domain at the intersection of computer science, math, logic, topology and category theory. It stems from problems coming from the definition of equality in higher-order intuitionistic logics. The central paradigm is to interpret elements of a type as points, equality as paths between points, and types as topological spaces. The main contribution is the "Univalence Axiom", which states that two types are equal if and only if they are equivalent. In computer words: two types that behave the same are the same. This theory has very deep developments that I don't fully understand, but trust me, it's dope!
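(For the curious: in the notation of the HoTT book, univalence says the canonical map from equalities to equivalences is itself an equivalence,

$$\mathsf{ua} : (A =_{\mathcal{U}} B) \simeq (A \simeq B) \quad \text{for all } A, B : \mathcal{U},$$

where A ≃ B means a function A → B with an inverse up to homotopy. This is only a sketch of the statement; the subtleties live in how ≃ and = are defined.)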
8 notes · View notes
youzicha · 1 year ago
Text
Consistency and Reducibility: Which is the theorem and which is the lemma?
Here's an example from programming language theory which I think is an interesting case study about how "stories" work in mathematics. Even if a given theorem is unambiguously defined and certainly true, the ways people contextualize it can still differ.
To set the scene, there is an idea that typed programming languages correspond to logics, so that a proof of an implication A→B corresponds to a function of type A→B. For example, the typing rules for simply-typed lambda calculus are exactly the same as the proof rules for minimal propositional logic, adding an empty type Void makes it intuitionistic propositional logic, by adding "dependent" types you get a kind of predicate logic, and really a lot of different programming language features also make sense as logic rules. The question is: if we propose a new programming language feature, what theorem should we prove in order to show that it also makes sense logically?
The story I first heard goes like this. In order to prove that a type system is a good logic we should prove that it is consistent, i.e. that not every type is inhabited, or equivalently that there is no program of type Void. (This approach is classical in both senses of the word: it goes back to Hilbert's program, and it is justified by Gödel's completeness theorem/model existence theorem, which basically says that every consistent theory describes something.)
Usually it is obvious that no values can be given type Void; the only issue is with non-value expressions. So it suffices to prove that the language is normalizing, that is to say that every program eventually computes to a value, as opposed to going into an infinite loop. So we want to prove:
If e is an expression with some type A, then e evaluates to some value v.
Naively, you may try to prove this by structural induction on e. (That is, you assume as an induction hypothesis that all subexpressions of e normalize, and prove that e does.) However, this proof attempt gets stuck in the case of a function call like (λx.e₁) e₂. Here we have some function (λx.e₁) : A→B and a function argument e₂ : A. The induction hypothesis just says that (λx.e₁) normalizes, which is trivially true since it's already a value, but what we actually need is an induction hypothesis that says what will happen when we call the function.
In 1967 William Tait had a good idea. We should instead prove:
If e is an expression with some type A, then e is reducible at type A.
"Reducible at type A" is a predicate defined on the structure of A. For base types, it just means normalizable, while for function types we define
e is reducible at type A→B ⇔ for all expressions e₁, if e₁ is reducible at A then (e e₁) is reducible at B.
For example, a function is reducible at type Bool→Bool→Bool if whenever you call it with two normalizing boolean arguments, it returns a boolean value (rather than looping forever).
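For reference, the definition spelled out (a sketch, with ι ranging over base types):

$$\mathrm{Red}_{\iota}(e) \iff e \Downarrow v, \qquad \mathrm{Red}_{A \to B}(e) \iff \forall e_1.\ \mathrm{Red}_A(e_1) \Rightarrow \mathrm{Red}_B(e\,e_1).$$

The statement actually proved by induction on typing derivations is the "fundamental lemma": if x₁:A₁, …, xₙ:Aₙ ⊢ e : B and each eᵢ is reducible at Aᵢ, then e[e₁/x₁, …, eₙ/xₙ] is reducible at B. Normalization at type B is the special case of an empty context, but now the application case goes through, because the induction hypothesis for λx.e₁ says what happens when you call it.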
This really is a very good idea, and it can be generalized to prove lots of useful theorems about programming languages beyond just termination. But the way I (and I think most other people, e.g. Benjamin Pierce in Types and Programming Languages) have told the story, it is strictly a technical device: we prove consistency via normalization via reducibility.
The story works less well when you consider programs that aren't normalizing, which is certainly not an uncommon situation: nothing in Java or Haskell forbids you from writing infinite loops. So there has been some interest in how dependent types work if you make termination-checking optional, with some famous projects along these lines being Idris and Dependent Haskell. The idea here is that if you write a program that does terminate it should be possible to interpret it as a proof, but even if a program is not obviously terminating you can still run it.
At this point, with the "consistency through normalization" story in mind, you may have a bad idea: "we can just let the typechecker try to evaluate a given expression at typechecking-time, and if it computes a value, then we can use it as a proof!" Indeed, if you do so then the typechecker will reject all attempts to "prove" Void, so you actually create a consistent logic.
If you think about it a little longer, you notice that it's a useless logic. For example, a statement like ∀n.(n² = 3) is provable: it's inhabited by the value (λn. infinite_loop()). That function is a perfectly fine value, even though it will diverge as soon as you call it. In fact, all ∀-statements and implications are inhabited by function values, and proving universally quantified statements is the entire point of using logical proof at all.
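In Haskell terms (a minimal sketch of the failure mode): once unrestricted recursion is allowed, every type is inhabited, so inhabitation at function types proves nothing.

```haskell
-- A diverging value inhabits every type.
bottom :: a
bottom = bottom

-- Hence every "implication" has a proof term that typechecks...
bogus :: a -> b
bogus _ = bottom

-- ...but calling it never returns, so it carries no logical content.
```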
So what theorem should you prove, to ensure that the logic makes sense? You want to say both that Void is unprovable, and also that if a type A→B is inhabited, then A really implies B, and so on recursively for any arrow types inside A or B. If you think a bit about this, you want to prove that if e:A, then e is reducible at type A... And in fact, Kleene had already proposed basically this (under the name realizability) as a semantics for Intuitionistic Logic, back in the 1940s.
So in the end, you end up proving the same thing anyway—and none of this discussion really becomes visible in the formal sequence of theorems and lemmas. The false starts need to be passed along in asides in the text, or in tumblr posts.
8 notes · View notes
xeter-group · 2 years ago
Text
classical logic vs intuitionistic logic
When I first heard about intuitionistic logic I was kind of confused. To quickly recap, classical logic is what we call 'normal logic' that you might have been taught in school. We have our AND, our OR, our NOTs, and so on. We also have some laws. For example, NOT (NOT (x)) is the same as x. When I learned about classical logic I thought it was obvious. What else could logic mean? But there are other logics, like intuitionistic logic. Intuitionistic logic *rejects* deducing x from NOT (NOT (x)). What does that even mean??? How does it make sense for NOT (NOT (x)) to be something other than x? What do you mean, "different logic"???? That's nonsense!
The issue is that I had no idea about the difference between "syntax" and "semantics". In my introductory logic class I was taught there are two ways to prove logical statements - I can draw a big truth table, or I can use the laws of logical inference. The truth table is just me taking a logical formula, substituting in different truth values, and evaluating the operators. Using the laws of logical inference means applying a sequence of laws like "a AND b = b AND a" and "a AND a = a".
From a formal perspective, these are actually very different. The laws of logical inference are what are called "syntactic rules". That is, you don't need to assign a value to the terms you are operating on. You don't even need to know they represent truth values. You can just manipulate them formally. Using a big truth table is relying on what is called the "semantics" of the logic. That is, you need to remember that a term is either true or false, you need to remember the definitions of "AND" and "OR", and so on.
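In Haskell (a quick sketch), the semantic route is literally just exhaustive evaluation:

```haskell
-- A two-row truth table for double negation:
doubleNegHolds :: Bool
doubleNegHolds = all (\x -> not (not x) == x) [False, True]

-- A four-row table for one of De Morgan's laws:
deMorganHolds :: Bool
deMorganHolds =
  and [ not (x && y) == (not x || not y)
      | x <- [False, True], y <- [False, True] ]
```

Both evaluate to True, which is all a truth-table proof is.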
But there is something interesting about the syntactic rules.
There is nothing inherently "logic-ey" about them. There is nothing that's necessarily "TRUE" or "FALSE". There is a symbol for a tautology, a symbol for a contradiction, and so on, but the only reason we call them tautology or contradiction is because of their behaviour with AND and OR. The ideas of "TRUE" and "FALSE" that we are used to are just one *interpretation* of the rules of classical logic. That is, the laws of logical inference tell us how we can manipulate symbols on a page to 'deduce' things. They are a series of rules, or axioms if you wish, governing a set of values (which we may or may not choose to be TRUE and FALSE) and functions (which we may or may not choose to be NOT, AND, OR). The normal ideas of TRUE, FALSE, AND, NOT, and OR (called boolean logic) are meanings that we can substitute into the aforementioned manipulations that are consistent with them. (For those who know what a model is, the syntactic rules are a theory, and boolean logic is a model of it.)
Isn't it interesting, then, that the things you can prove with the laws of logical inference are exactly the same as the things you can prove with a truth table? Maybe not. After all, we could just add deduction laws that are true (with respect to boolean logic) until we could prove everything we wanted to. For example, if we can't prove two things are equal that *should* be equal, just make the fact that those two are equal a new law of logical inference. But what system do we get if we remove a "load bearing" law? For example, what if we no longer require that NOT (NOT (x)) = x? We would have a weaker system of axioms. We call this intuitionistic logic.
To be clear, you are still allowed to have a system for which that is true. Boolean logic is still a valid meaning to assign to intuitionistic logic, you just can't prove every statement of it. But are there any other interesting systems that satisfy this weaker set of laws? Can we call them "logic"?
Well I don't THINK of them as logic in the sense that they don't talk about truth. Instead the way to think about it is that classical logic is a set of laws that talk about truth, whereas intuitionistic logic is a set of laws that talk about constructability, or provability. For example, a statement is still either true or false. It is "obviously" always true that a statement is true or false. But it is not obviously true that a statement is either provable or disprovable. It is not necessarily true that you can either construct a proof or a counterexample.
Going back to "NOT NOT x = x". Let's say that "x" means "I can prove x", and "NOT x" means "I can disprove x". If I can disprove the fact that you can disprove x, that does not automatically mean that you can prove x. Maybe you can't prove or disprove it. It's still either true or false, we just can't prove it. This is a *different* interpretation of "NOT" and the term "x" that satisfies the rules of intuitionistic logic, but not classical logic. Note that if I can prove x, then I can definitely disprove the fact that you can disprove it. You just can't go the other way around.
So what is true (read: provable) in intuitionistic logic? You can't prove anything in intuitionistic logic that you can't prove in classical logic, because every valid law of deduction in intuitionistic logic is a valid law of classical logic. So we are able to prove strictly fewer things.
Why might we want to do this, then? In the realm of pure maths we often don't care about statements that are not decidable. Well, that changes if we are working with a programming language. If I want to construct a function that returns me a value of some type, I want to see the actual value of the type. If I called a function and it just reassured me that there is an output value I'm looking for, I wouldn't be too happy about that. This links into type theory, computer proofs via types, and functional programming. It turns out there is a correspondence between computer programs (with types) and proofs in intuitionistic logic! It's called the "Curry–Howard correspondence". We think of every statement of logic as corresponding to a type, and every proof of a statement as corresponding to a value of that type. The details are below for those who are interested in computer types; they're pitched more at functional-programming-inclined people and assume some Haskell to fully understand, but are technically self-contained.
An implication between two statements corresponds to a function between the two types. TRUE is any type that is inhabited, such as a "unit" type containing one element called (). FALSE is the type containing no values, called Void. NOT x is a function x -> Void. AND is the tuple type, and OR is the disjoint union (Either) type. For example if we have two types A and B, we can form a product type A x B, which has inhabitants of pairs (a,b) where a is type A and b is type B. Similarly we can form A | B which has inhabitants that are either Left a or Right b, where a is of type A and b is of type B.
The statement that A AND B is equivalent to B AND A is the fact that we can construct functions A x B -> B x A and B x A -> A x B, given by the swapping function (a, b) |-> (b,a). The statement that A AND TRUE is equivalent to A can be rethought of as the fact that there are functions between the types A and A x (), given by a |-> (a, ()) and (a, _) |-> a. The statement that A AND FALSE is equivalent to FALSE is the statement that A x Void has no elements.
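Here's a minimal Haskell sketch of those witnesses (the names are mine):

```haskell
import Data.Void (Void)

-- A AND B  <=>  B AND A
andComm :: (a, b) -> (b, a)
andComm (x, y) = (y, x)

-- A AND TRUE  <=>  A
andTrueIntro :: a -> (a, ())
andTrueIntro x = (x, ())

andTrueElim :: (a, ()) -> a
andTrueElim (x, _) = x

-- A AND FALSE  <=>  FALSE: a pair with a Void component
-- can never be built, so we can project the Void back out.
andFalse :: (a, Void) -> Void
andFalse (_, v) = v
```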
The statement that NOT NOT x does not imply x is equivalent to the statement that there is no function with type ((x -> Void) -> Void) -> x. Imagine trying to construct such a function. You can't. You don't have any way to produce an x. Note that you can very easily create a function x -> ((x -> Void) -> Void). That's just function application. Neat, huh?
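As code (again a sketch):

```haskell
import Data.Void (Void)

-- x -> NOT (NOT x): just function application.
dni :: a -> ((a -> Void) -> Void)
dni x k = k x

-- The reverse direction, ((a -> Void) -> Void) -> a, has no total
-- implementation: there is simply no way to manufacture an `a`.
```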
Another interesting application of the difference between semantics and syntax in programming languages is Conal Elliott's "Compiling to Categories" paper, where he reinterprets the syntax of the programming language haskell to talk about different kinds of functions and objects.
33 notes · View notes
meltorights · 6 months ago
Note
Eric Kripke was the creator and first showrunner for supernatural
trying to decide whether it's more powerful to give birth to supernatural or intuitionist logic . . .
4 notes · View notes
omegaphilosophia · 6 months ago
Text
The Philosophy of Arithmetic
The philosophy of arithmetic examines the foundational, conceptual, and metaphysical aspects of arithmetic, which is the branch of mathematics concerned with numbers and the basic operations on them, such as addition, subtraction, multiplication, and division. Philosophers of arithmetic explore questions related to the nature of numbers, the existence of mathematical objects, the truth of arithmetic propositions, and how arithmetic relates to human cognition and the physical world.
Key Concepts:
The Nature of Numbers:
Platonism: Platonists argue that numbers exist as abstract, timeless entities in a separate realm of reality. According to this view, when we perform arithmetic, we are discovering truths about this independent mathematical world.
Nominalism: Nominalists deny the existence of abstract entities like numbers, suggesting that arithmetic is a human invention, with numbers serving as names or labels for collections of objects.
Constructivism: Constructivists hold that numbers and arithmetic truths are constructed by the mind or through social and linguistic practices. They emphasize the role of mental or practical activities in the creation of arithmetic systems.
Arithmetic and Logic:
Logicism: Logicism is the view that arithmetic is reducible to pure logic. This was famously defended by philosophers like Gottlob Frege and Bertrand Russell, who attempted to show that all arithmetic truths could be derived from logical principles.
Formalism: In formalism, arithmetic is seen as a formal system, a game with symbols governed by rules. Formalists argue that the truth of arithmetic propositions is based on internal consistency rather than any external reference to numbers or reality.
Intuitionism: Intuitionists, such as L.E.J. Brouwer, argue that arithmetic is based on human intuition and the mental construction of numbers. They reject the notion that arithmetic truths exist independently of the human mind.
Arithmetic Truths:
A Priori Knowledge: Many philosophers, including Immanuel Kant, have argued that arithmetic truths are known a priori, meaning they are knowable through reason alone and do not depend on experience.
Empiricism: Some philosophers, such as John Stuart Mill, have argued that arithmetic is based on empirical observation and abstraction from the physical world. According to this view, arithmetic truths are generalized from our experience with counting physical objects.
Frege's Criticism of Empiricism: Frege rejected the empiricist view, arguing that arithmetic truths are universal and necessary, which cannot be derived from contingent sensory experiences.
The Foundations of Arithmetic:
Frege's Foundations: In his work "The Foundations of Arithmetic," Frege sought to provide a rigorous logical foundation for arithmetic, arguing that numbers are objective and that arithmetic truths are analytic, meaning they are true by definition and based on logical principles.
Russell's Paradox: Bertrand Russell's discovery of a paradox in Frege's system led to questions about the logical consistency of arithmetic and spurred the development of set theory as a new foundation for mathematics.
Arithmetic and Set Theory:
Set-Theoretic Foundations: Modern arithmetic is often grounded in set theory, where numbers are defined as sets. For example, the number 1 can be defined as the set containing the empty set, and the number 2 as the set containing the set of the empty set. This approach raises philosophical questions about whether numbers are truly reducible to sets and what this means for the nature of arithmetic.
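(The coding just described is Zermelo's; a quick sketch of the two standard conventions:

$$\text{Zermelo: } 0 = \varnothing,\quad n+1 = \{n\} \qquad\qquad \text{von Neumann: } 0 = \varnothing,\quad n+1 = n \cup \{n\},\ \text{so } 2 = \{\varnothing, \{\varnothing\}\}.$$

The fact that 2 can equally well be {{∅}} or {∅, {∅}}, with arithmetic working fine either way, is precisely what fuels the question of whether numbers are truly reducible to sets.)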
Infinity in Arithmetic:
The Infinite: Arithmetic raises questions about the nature of infinity, particularly in the context of number theory. Is infinity a real concept, or is it merely a useful abstraction? The introduction of infinite numbers and the concept of limits in calculus have expanded these questions to new mathematical areas.
Peano Arithmetic: Peano's axioms formalize the arithmetic of natural numbers, raising questions about the nature of induction and the extent to which the system can account for all arithmetic truths, particularly regarding the treatment of infinite sets or sequences.
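(For reference, the core of first-order Peano arithmetic: axioms for the successor, defining equations for addition and multiplication, and the induction scheme:

$$\begin{aligned} & Sx \neq 0, \qquad Sx = Sy \rightarrow x = y,\\ & x + 0 = x, \qquad x + Sy = S(x + y),\\ & x \cdot 0 = 0, \qquad x \cdot Sy = x \cdot y + x,\\ & \bigl(\varphi(0) \land \forall x\,(\varphi(x) \rightarrow \varphi(Sx))\bigr) \rightarrow \forall x\,\varphi(x) \quad \text{for each formula } \varphi. \end{aligned}$$

Because induction is a scheme over formulas rather than a single axiom, the first-order theory cannot exclude non-standard models, which is one source of the worry about accounting for all arithmetic truths.)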
The Ontology of Arithmetic:
Realism vs. Anti-Realism: Realists believe that numbers and arithmetic truths exist independently of human thought, while anti-realists, such as fictionalists, argue that numbers are useful fictions that help us describe patterns but do not exist independently.
Mathematical Structuralism: Structuralists argue that numbers do not exist as independent objects but only as positions within a structure. For example, the number 2 has no meaning outside of its relation to other numbers (like 1 and 3) within the system of natural numbers.
Cognitive Foundations of Arithmetic:
Psychological Approaches: Some philosophers and cognitive scientists explore how humans develop arithmetic abilities, considering whether arithmetic is innate or learned and how it relates to our cognitive faculties for counting and abstraction.
Embodied Arithmetic: Some theories propose that arithmetic concepts are grounded in physical and bodily experiences, such as counting on fingers or moving objects, challenging the purely abstract view of arithmetic.
Arithmetic in Other Cultures:
Cultural Variability: Different cultures have developed distinct systems of arithmetic, which raises philosophical questions about the universality of arithmetic truths. Is arithmetic a universal language, or are there culturally specific ways of understanding and manipulating numbers?
Historical and Philosophical Insights:
Aristotle and Number as Quantity: Aristotle considered numbers as abstract quantities and explored their relationship to other categories of being. His ideas laid the groundwork for later philosophical reflections on the nature of number and arithmetic.
Leibniz and Binary Arithmetic: Leibniz's work on binary arithmetic (the foundation of modern computing) reflected his belief that arithmetic is deeply tied to logic and that numerical operations can represent fundamental truths about reality.
Kant's Synthetic A Priori: Immanuel Kant argued that arithmetic propositions, such as "7 + 5 = 12," are synthetic a priori, meaning that they are both informative about the world and knowable through reason alone. This idea contrasts with the empiricist view that arithmetic is derived from experience.
Frege and the Logicization of Arithmetic: Frege’s attempt to reduce arithmetic to logic in his Grundgesetze der Arithmetik (Basic Laws of Arithmetic) was a foundational project for 20th-century philosophy of mathematics. Although his project was undermined by Russell’s paradox, it set the stage for later developments in the philosophy of mathematics, including set theory and formal systems.
The philosophy of arithmetic engages with fundamental questions about the nature of numbers, the existence of arithmetic truths, and the relationship between arithmetic and logic. It explores different perspectives on how we understand and apply arithmetic, whether it is an invention of the human mind, a discovery of abstract realities, or a formal system of rules. Through the works of philosophers like Frege, Kant, and Leibniz, arithmetic has become a rich field of philosophical inquiry, raising profound questions about the foundations of mathematics, knowledge, and cognition.
2 notes · View notes
anocana · 1 year ago
Text
this article goes into a lot of stuff that's way beyond me, but i think the section on constructive mathematics has a really worthwhile way of looking at things:
Constructive mathematics begins by removing the principle of excluded middle, and therefore the axiom of choice, because choice implies excluded middle. But why would anybody do such an outrageous thing?
I particularly like the analogy with Euclidean geometry. If we remove the parallel postulate, we get absolute geometry, also known as neutral geometry. If after we remove the parallel postulate, we add a suitable axiom, we get hyperbolic geometry, but if we instead add a different suitable axiom we get elliptic geometry. Every theorem of neutral geometry is a theorem of these three geometries, and more geometries. So a neutral proof is more general.
When I say that I am interested in constructive mathematics, most of the time I mean that I am interested in neutral mathematics, so that we simply remove excluded middle and choice, and we don't add anything to replace them. So my constructive definitions and theorems are also definitions and theorems of classical mathematics.
Occasionally, I flirt with axioms that contradict the principle of excluded middle, such as Brouwerian intuitionistic axioms that imply that "all functions (N -> 2) -> N are uniformly continuous", when we equip the set 2 with the discrete topology and the set N -> 2 with the product topology, so that we get the Cantor space. The contradiction with classical logic, of course, is that using excluded middle we can define non-continuous functions by cases. Brouwerian intuitionistic mathematics is analogous to hyperbolic or elliptic geometry in this respect. The "constructive" mathematics I am talking about in this post is like neutral geometry, and I would rather call it "neutral mathematics", but then nobody would know what I am talking about. That's not to say that the majority of mathematicians will know what I am talking about if I just say "constructive mathematics".
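To make the "non-continuous functions by cases" point concrete, here is the standard example. With excluded middle we may define, for α : N → 2,

$$f(\alpha) = \begin{cases} 0 & \text{if } \alpha(n) = 0 \text{ for all } n,\\ 1 & \text{otherwise.} \end{cases}$$

This f is not continuous at the all-zeros sequence: every finite prefix of the zero sequence extends both to sequences with f-value 0 and to sequences with f-value 1, so no finite amount of information about α determines f(α). Accordingly, no program can compute f, which is why the Brouwerian axioms can consistently deny its existence.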
But it is not (only) the generality of neutral mathematics that I find attractive. Somehow magically, constructions and proofs that don't use excluded middle or choice are automatically programs. The only way to define non-computable things is to use excluded middle or choice. There is no other way. At least not in the underlying type theories of proof assistants such as NuPrl, Coq, Agda and Lean. We don't need to consider Turing machines to establish computability. What is a computable sheaf, anyway? I don't want to pause to consider this question in order to use a sheaf topos to compute a number. We only need to consider sheaves in the usual mathematical sense.
Sometimes people ask me whether I believe in the principle of excluded middle. That would be like asking me whether I believe in the parallel postulate. It is clearly true in Euclidean geometry, clearly false in elliptic and in hyperbolic geometries, and deliberately undecided in neutral geometry. Not only that, in the same way as the parallel postulate defines Euclidean geometry, the principle of excluded middle and the axiom of choice define classical mathematics.
4 notes · View notes
perkwunos · 2 years ago
Text
It might be objected that Whitehead himself, in the opening chapter, writes (p. 12) that "philosophy has been misled by the example of mathematics; and even in mathematics the statement of the ultimate logical principles is beset with difficulties, as yet insuperable." Also, (pp. 11-12) "philosophy has been haunted by the unfortunate notion that its method is dogmatically to indicate premises which are severally clear, distinct, and certain; and to erect upon those premises a deductive system of thought." On the other hand, Whitehead emphasizes that the categoreal scheme must be "coherent" and "logical," and that (p. 5) "the term 'logical' has its ordinary meaning, including 'logical' consistency, or lack of contradiction, the definition of constructs in logical terms, the exemplification of general logical notions in specific instances, and the principles of inference." Also (p. 13) "the use of the categoreal scheme ... is to argue from it boldly and with rigid logic. The scheme should therefore be stated with the utmost precision and definiteness, to allow of such argumentation." ... "Speculative boldness (p. 25) must be balanced by complete humility before logic, and before fact." There is no conflict between these two types of statements if it is recognized (p. 12) that "the accurate expression of the final generalities is the goal of discussion and not its origin" and that "metaphysical categories ... are tentative formulations of the ultimate generalities." Thus even tentative statements are to be expressed "with the utmost precision and definiteness" and with "complete humility before logic." If "the logician's alternative, true or false" is applied to the scheme of philosophic categories regarded "as one complex assertion ... the answer must be that the scheme is false. The same answer must be given to a like question respecting the existing formulated principles of any science." The categoreal scheme is put forward rather in a provisory way, to be improved upon by further reflection, better formulation, deeper insight, and discovery of further facts, scientific laws, and so on. Thus it is not "dogmatically" contended that the items of the categoreal scheme are "severally clear, distinct, and certain." Such a contention would indeed be unfortunate, and has been abandoned for the most part even in mathematics. Not only the "difficulties, as yet insuperable" that infect Principia Mathematica (as Whitehead noted, p. 12, footnote 3), but also the presence now of various kinds of set-theoretic alternatives, Gödel's incompleteness theorem, the Löwenheim–Skolem theorem, various intuitionistic and constructivistic systems—all of these militate against any dogmatically certain rendition of the fundamental notions of mathematics. Whitehead's strictures against mathematics, written before 1929, are based upon an inadequate conception of its foundations and are no longer applicable.
Richard Milton Martin, Whitehead’s Categoreal Scheme and Other Papers
4 notes · View notes
drmikewatts · 16 days ago
Text
IEEE Transactions on Fuzzy Systems, Volume 33, Issue 6, June 2025
1) Brain-Inspired Fuzzy Graph Convolution Network for Alzheimer's Disease Diagnosis Based on Imaging Genetics Data
Author(s): Xia-An Bi, Yangjun Huang, Wenzhuo Shen, Zicheng Yang, Yuhua Mao, Luyun Xu, Zhonghua Liu
Pages: 1698 - 1712
2) Adaptive Incremental Broad Learning System Based on Interval Type-2 Fuzzy Set With Automatic Determination of Hyperparameters
Author(s): Haijie Wu, Weiwei Lin, Yuehong Chen, Fang Shi, Wangbo Shen, C. L. Philip Chen
Pages: 1713 - 1725
3) A Novel Reliable Three-Way Multiclassification Model Under Intuitionistic Fuzzy Environment
Author(s): Libo Zhang, Cong Guo, Tianxing Wang, Dun Liu, Huaxiong Li
Pages: 1726 - 1739
4) Guaranteed State Estimation for H−/L∞ Fault Detection of Uncertain Takagi–Sugeno Fuzzy Systems With Unmeasured Nonlinear Consequents
Author(s): Masoud Pourasghar, Anh-Tu Nguyen, Thierry-Marie Guerra
Pages: 1740 - 1752
5) Online Self-Learning Fuzzy Recurrent Stochastic Configuration Networks for Modeling Nonstationary Dynamics
Author(s): Gang Dang, Dianhui Wang
Pages: 1753 - 1766
6) ADMTSK: A High-Dimensional Takagi–Sugeno–Kang Fuzzy System Based on Adaptive Dombi T-Norm
Author(s): Guangdong Xue, Liangjian Hu, Jian Wang, Sergey Ablameyko
Pages: 1767 - 1780
7) Constructing Three-Way Decision With Fuzzy Granular-Ball Rough Sets Based on Uncertainty Invariance
Author(s): Jie Yang, Zhuangzhuang Liu, Guoyin Wang, Qinghua Zhang, Shuyin Xia, Di Wu, Yanmin Liu
Pages: 1781 - 1792
8) TOGA-Based Fuzzy Grey Cognitive Map for Spacecraft Debris Avoidance
Author(s): Chenhui Qin, Yuanshi Liu, Tong Wang, Jianbin Qiu, Min Li
Pages: 1793 - 1802
9) Reinforcement Learning-Based Fault-Tolerant Control for Semiactive Air Suspension Based on Generalized Fuzzy Hysteresis Model
Author(s): Pak Kin Wong, Zhijiang Gao, Jing Zhao
Pages: 1803 - 1814
10) Adaptive Fuzzy Attention Inference to Control a Microgrid Under Extreme Fault on Grid Bus
Author(s): Tanvir M. Mahim, A.H.M.A. Rahim, M. Mosaddequr Rahman
Pages: 1815 - 1824
11) Semisupervised Feature Selection With Multiscale Fuzzy Information Fusion: From Both Global and Local Perspectives
Author(s): Nan Zhou, Shujiao Liao, Hongmei Chen, Weiping Ding, Yaqian Lu
Pages: 1825 - 1839
12) Fuzzy Domain Adaptation From Heterogeneous Source Teacher Models
Author(s): Keqiuyin Li, Jie Lu, Hua Zuo, Guangquan Zhang
Pages: 1840 - 1852
13) Differentially Private Distributed Nash Equilibrium Seeking for Aggregative Games With Linear Convergence
Author(s): Ying Chen, Qian Ma, Peng Jin, Shengyuan Xu
Pages: 1853 - 1863
14) Robust Divide-and-Conquer Multiple Importance Kalman Filtering via Fuzzy Measure for Multipassive-Sensor Target Tracking
Author(s): Hongwei Zhang
Pages: 1864 - 1875
15) Fully Informed Fuzzy Logic System Assisted Adaptive Differential Evolution Algorithm for Noisy Optimization
Author(s): Sheng Xin Zhang, Yu Hong Liu, Xin Rou Hu, Li Ming Zheng, Shao Yong Zheng
Pages: 1876 - 1888
16) Impulsive Control of Nonlinear Multiagent Systems: A Hybrid Fuzzy Adaptive and Event-Triggered Strategy
Author(s): Fang Han, Hai Jin
Pages: 1889 - 1898
17) Uncertainty-Aware Superpoint Graph Transformer for Weakly Supervised 3-D Semantic Segmentation
Author(s): Yan Fan, Yu Wang, Pengfei Zhu, Le Hui, Jin Xie, Qinghua Hu
Pages: 1899 - 1912
18) Observer-Based SMC for Discrete Interval Type-2 Fuzzy Semi-Markov Jump Models
Author(s): Wenhai Qi, Runkun Li, Peng Shi, Guangdeng Zong
Pages: 1913 - 1925
19) Network Security Scheme for Discrete-Time T-S Fuzzy Nonlinear Active Suspension Systems Based on Multiswitching Control Mechanism
Author(s): Jiaming Shen, Yang Liu, Mohammed Chadli
Pages: 1926 - 1936
20) Fuzzy Multivariate Variational Mode Decomposition With Applications in EEG Analysis
Author(s): Hongkai Tang, Xun Yang, Yixuan Yuan, Pierre-Paul Vidal, Danping Wang, Jiuwen Cao, Duanpo Wu
Pages: 1937 - 1948
21) Adaptive Broad Network With Graph-Fuzzy Embedding for Imbalanced Noise Data
Author(s): Wuxing Chen, Kaixiang Yang, Zhiwen Yu, Feiping Nie, C. L. Philip Chen
Pages: 1949 - 1962
22) Average Filtering Error-Based Event-Triggered Fuzzy Filter Design With Adjustable Gains for Networked Control Systems
Author(s): Yingnan Pan, Fan Huang, Tieshan Li, Hak-Keung Lam
Pages: 1963 - 1976
23) Fuzzy and Crisp Gaussian Kernel-Based Co-Clustering With Automatic Width Computation
Author(s): José Nataniel A. de Sá, Marcelo R.P. Ferreira, Francisco de A.T. de Carvalho
Pages: 1977 - 1991
24) A Biselection Method Based on Consistent Matrix for Large-Scale Datasets
Author(s): Jinsheng Quan, Fengcai Qiao, Tian Yang, Shuo Shen, Yuhua Qian
Pages: 1992 - 2005
25) Nash Equilibrium Solutions for Switched Nonlinear Systems: A Fuzzy-Based Dynamic Game Method
Author(s): Yan Zhang, Zhengrong Xiang
Pages: 2006 - 2015
26) Active Domain Adaptation Based on Probabilistic Fuzzy C-Means Clustering for Pancreatic Tumor Segmentation
Author(s): Chendong Qin, Yongxiong Wang, Fubin Zeng, Jiapeng Zhang, Yangsen Cao, Xiaolan Yin, Shuai Huang, Di Chen, Huojun Zhang, Zhiyong Ju
Pages: 2016 - 2026
0 notes
lilith-hazel-mathematics · 2 months ago
Text
Intro post
Hiii welcome to the blog of Hazel and also Lilith!! A mathematics blog that also posts a lot of fandom & SJW content. We sometimes write answers on Math Stack Exchange, here's a list of our greatest hits! You can also check out our mathblr tagged posts, or instead, you can look at this compiled list of cool math posts we made!!!
History of the mathematics of unknowability
Some Gödel-themed jokes
Categorizing the symmetric manifolds
Count to 100 challenge, currently at 16. Please continue it!!!
Why we use Von Neumann's ordinal convention
Self-reference is fine actually; a theory about recursion
The Law of Excluded Middle is "not false" in intuitionist logic
Cardinal Arithmetic is Easy
Analyzing the causes of incompleteness
Axiom of Hierarchy
7 notes · View notes
khaosophist · 2 months ago
Text
The mirror is shown.
No, in fact, I thought you were trying to get me mad. I apologize; in essence I just wanted to emphasize that how you conceive of magic may be too strict. I hoped repetition would make you understand that 'magic' can solve all the issues the context brings to you if you let it be free.
Look into magic systems. I was truly surprised that the idea of magic being transcendent of most laws seemed alien to you. Look into meta-magic, which is basically the manipulation of the laws of magic rather than the manifestations of magic.
Heck, here's a system that can push things for you while being based on a basic die system.
https://i.4pcdn.org/tg/1423112192353.pdf
That being said, look into Planck physics and quantum mechanics. Propositional logic, modal logic, intuitionistic logic, meta-logic, fuzzy logic. These can give you a sense of rational ground for wonders.
Please understand that the word 'Logical' is vast, and most use of the word has become a rhetorical device rather than a human endeavour in understanding.
You also mentioned that something was a 'Genuine Question'. This was a flag to me. Why would one ask a question that is not genuine? It seemed to me that you were half admitting you were mocking me or leading me on, so, of course, I kept my answers short.
Anyways.
Fairies sexy.
Don’t feed your pocket fairy alcohol 🧚‍♀️
5K notes · View notes