#AI logical reasoning
Explore tagged Tumblr posts
wackywatchdotcom · 3 months ago
Text
sometimes i think about the idea that the digital circus is JUST caine. its all what hes made. i mean he named himself, who's to say the circus isnt his own creation, an extension of himself
47 notes · View notes
rathayibacter · 2 months ago
Text
similar to Goodhart's Law (when a measure becomes a target, it fails as a measure), we should also have "when a critique becomes a shibboleth, it stops being critique"
17 notes · View notes
diceroulette · 2 months ago
Text
i could write a whole autism-filled essay about Ai LAS being extensively BPD-coded but also i fear i'd get rocks thrown at me .
EDIT: ALSO TO BE CLEAR I HAVE BPD I HAVE BPD THIS IS WRITTEN BY A BPD HAVER I FORGOT TO FUCKING ADD THAT LMFAO
#blaire.txt#i do worry people would take it as ''oh you're saying the manipulative toxic character has BPD because she's manipulative and toxic''#which is ABSOLUTELY not true. i do not think she has bpd because she was manipulative and toxic to Yuuna#tbc im not denying she was because i mean what the fuck else do you call lying to someone by saying people were laughing at her and judging#her behind her back when you knew they Were Not Fucking Doing That .#but i hate to say it. i can understand exactly why she behaved the way she did in that scene. i still think that it was Wrong to do#but i know the EXACT string of logic that Ai went through all too well.#and it's NOT just that scene. the entire digital apathy ending is like... very clear about it to me#i could honestly ramble about this for hours i absolutely love this game and Ai so much but. alas.#yuuna also has bpd but i think thats more of a given#with ai it's like. THEY GAVE THE AI PROGRAM BPD . /lh#also another unrelated but still fucking painful (/lh) tangent is the digital apathy ending#and how Ai tackles her experiences of love and nonhumanity. how she's incapable of loving in a ''normal'' ''human'' way#because she isnt and can never be human. at best she's an imitation of a character.#made by a human. but never quite being human. knowing despite Yuuna's clear love for Ai Herself as a Person that its also#directly influenced by her being a facsimile of a character she loves. that she'll never be the same as yuuna because she was never#meant to be like her. feel like her. be human like her. and yet... these emotions. these feelings. that which have been claimed to be human#theyre right in front of her. almost within reach... but she cant love like a human right? she cant have that which she wants.#shes a program. something robotic. idkidk im rambling and maybe im being annoying and fake deep but#im ill forever about her. sorry#because thats just so fucking. augh. it hits so close to home for me for so many reasons. i really adore this motif#of characters whose love is defined by their nonhumanity. it just fucks me up so hard LMFAO#anyways ramble over i just . i have so much to say always LMFAO#... not going to maintag this but it definitely deals with spoilers SO#love angel syndrome spoilers
6 notes · View notes
saturatedsinset · 1 year ago
Text
"this is ai generated" "how do you know that?" "I have a sense for these things" and then it gets repeated as fact
28 notes · View notes
tvckerwash · 2 years ago
Text
I hc wash and south (and by proxy, north) as all being ODSTs prior to pfl and one thing I really like about hcing them as such is that it adds another layer of depth to why they were all chosen to be a part of the recovery force.
ODSTs are a special forces unit within the marines, and they're generally used as force multipliers and in high-risk or sensitive operations. two such scenarios include the recovery or recapture of personnel and high-level assets behind enemy lines, as well as deep reconnaissance and intelligence gathering. ODSTs are also used in politically sensitive operations, which pfl was following the crash of the moi.
so basically, who better to be on the recovery force than former ODSTs who already have a background doing the kind of work that would need to be done?
this also adds to some of the tension between north and south as well imo, as while they're a great team who are capable of working together, they clearly have two very different skillsets—south is not portrayed as someone who has the patience necessary for long reconnaissance missions, and part of the reason why team b failed so spectacularly is because two snipers and an intelligence operative are not a good choice for a smash and grab mission. had north been replaced with south, things would've probably gone way better for them, because south is actually fairly similar to wash in her "get in, get it done, get out" mentality, though where wash comes off as more methodical and is willing to take that "wait and see" approach, south throws caution to the wind and has a "we'll cross that bridge when we get there" approach.
this is probably why wash and south were (on paper at least) going to get eta and iota—south would've benefited greatly from having an ai that was afraid and anxious as it would force her to slow down and think things through more, and wash getting an ai that was happy and cheerful would force him to loosen up a bit and be less high-strung and serious.
62 notes · View notes
madfantasy · 1 year ago
Text
Good evening
I almost got my day made by bot heh...
Anyways, sending warm thoughts
22 notes · View notes
httpsawesome · 4 months ago
Text
Okay one thing about the Minecraft movie I cannot get behind (and it is technically spoilers so I'll hide it)
There is NO REASON for all the 80s nostalgia in it? Like the character is a washed up champion gamer for an arcade cabinet fighting game and he can't let go of the fashion or memories of the era and it's like
Why
Beyond the fact that so many movies and shows are drowning in 80s nostalgia, what did that bring to a Minecraft movie? He has, like, one line in the movie about understanding game mechanics better than anyone else, and not only did it not matter, it DOESN'T MAKE SENSE, since it isn't the same type of game AT ALL
There's NOTHING this element brings to the movie beyond the fact that it is a super easy and lazy shorthand to inform us that this character misses when he was cool and is clinging to the past
It would have made a lot more sense to make this a man obsessed with video games in general, rather than exclusively retro games (and seemingly only retro fighting games at that)
2 notes · View notes
jcmarchi · 5 months ago
Text
LLMs Are Not Reasoning—They’re Just Really Good at Planning
New Post has been published on https://thedigitalinsider.com/llms-are-not-reasoning-theyre-just-really-good-at-planning/
Large language models (LLMs) like OpenAI’s o3, Google’s Gemini 2.0, and DeepSeek’s R1 have shown remarkable progress in tackling complex problems, generating human-like text, and even writing code with precision. These advanced LLMs are often referred to as “reasoning models” for their ability to analyze and solve complex problems. But do these models actually reason, or are they just exceptionally good at planning? This distinction is subtle yet profound, and it has major implications for how we understand the capabilities and limitations of LLMs.
To understand this distinction, let’s compare two scenarios:
Reasoning: A detective investigating a crime must piece together conflicting pieces of evidence, deduce which of them are false, and arrive at a conclusion from limited information. This process involves inference, contradiction resolution, and abstract thinking.
Planning: A chess player calculating the best sequence of moves to checkmate their opponent.
While both processes involve multiple steps, the detective engages in deep reasoning to make inferences, evaluate contradictions, and apply general principles to a specific case. The chess player, on the other hand, is primarily engaging in planning, selecting an optimal sequence of moves to win the game. LLMs, as we will see, function much more like the chess player than the detective.
Understanding the Difference: Reasoning vs. Planning
To see why LLMs are good at planning rather than reasoning, it is important to first understand the difference between the two terms. Reasoning is the process of deriving new conclusions from given premises using logic and inference. It involves identifying and correcting inconsistencies, generating novel insights rather than just retrieving information, making decisions in ambiguous situations, and engaging in causal understanding and counterfactual thinking (“What if?” scenarios).
Planning, on the other hand, focuses on structuring a sequence of actions to achieve a specific goal. It relies on breaking complex tasks into smaller steps, following known problem-solving strategies, adapting previously learned patterns to similar problems, and executing structured sequences rather than deriving new insights. While both reasoning and planning involve step-by-step processing, reasoning requires deeper abstraction and inference, whereas planning follows established procedures without generating fundamentally new knowledge.
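To make the planning half of this distinction concrete, here is a minimal toy sketch in Python (the coffee-making actions and every name in it are illustrative assumptions, not drawn from any real system): a breadth-first planner that chains known actions to reach a goal. Crucially, it only recombines operations it was handed up front; it never derives a new rule.

```python
from collections import deque

# Toy state-space planner: each known action maps a precondition
# state to a result state. Planning = searching for a sequence of
# these actions that turns the start state into the goal state.
ACTIONS = {
    "boil water":  ("kettle empty", "water boiled"),
    "add grounds": ("water boiled", "coffee brewing"),
    "pour cup":    ("coffee brewing", "coffee ready"),
}

def plan(start: str, goal: str) -> list[str] | None:
    """Breadth-first search for an action sequence from start to goal."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, steps = queue.popleft()
        if state == goal:
            return steps
        for action, (pre, post) in ACTIONS.items():
            if pre == state and post not in seen:
                seen.add(post)
                queue.append((post, steps + [action]))
    return None  # no action sequence reaches the goal

print(plan("kettle empty", "coffee ready"))
# ['boil water', 'add grounds', 'pour cup']
```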
How LLMs Approach “Reasoning”
Modern LLMs, such as OpenAI’s o3 and DeepSeek-R1, are equipped with a technique known as Chain-of-Thought (CoT) reasoning to improve their problem-solving abilities. This method encourages models to break problems down into intermediate steps, mimicking the way humans think through a problem logically. To see how it works, consider a simple math problem:
If a store sells apples for $2 each but offers a discount of $1 per apple if you buy more than 5 apples, how much would 7 apples cost?
A typical LLM using CoT prompting might solve it like this:
Determine the regular price: 7 * $2 = $14.
Identify that the discount applies (since 7 > 5).
Compute the discount: 7 * $1 = $7.
Subtract the discount from the total: $14 – $7 = $7.
By explicitly laying out a sequence of steps, the model minimizes the chance of errors that arise from trying to predict an answer in one go. While this step-by-step breakdown makes the LLM look like it is reasoning, it is essentially a form of structured problem-solving, much like following a step-by-step recipe. A true reasoning process, on the other hand, might recognize a general rule: if the discount applies beyond 5 apples, then every apple costs $1. A human can infer such a rule immediately, but an LLM cannot, as it simply follows a structured sequence of calculations.
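The contrast above can be made concrete in a few lines of Python (a minimal sketch; the function names are ours, purely for illustration): the first function mirrors the structured CoT breakdown, while the second encodes the general rule a human might infer.

```python
# Step-by-step version: mirrors the structured CoT breakdown above.
def price_stepwise(quantity: int) -> int:
    regular = quantity * 2          # step 1: regular price at $2 per apple
    if quantity > 5:                # step 2: check whether the discount applies
        discount = quantity * 1    # step 3: $1 off per apple
        return regular - discount  # step 4: subtract the discount
    return regular

# Rule-based shortcut: the inferred general rule -- beyond 5 apples,
# every apple effectively costs $1.
def price_by_rule(quantity: int) -> int:
    return quantity * (1 if quantity > 5 else 2)

assert price_stepwise(7) == price_by_rule(7) == 7
```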
Why Chain-of-Thought Is Planning, Not Reasoning
While Chain-of-Thought (CoT) prompting has improved LLMs’ performance on logic-oriented tasks like math word problems and coding challenges, it does not involve genuine logical reasoning. This is because CoT follows procedural knowledge, relying on structured steps rather than generating novel insights. It lacks a true understanding of causality and abstract relationships, meaning the model does not engage in counterfactual thinking or consider hypothetical situations that require intuition beyond seen data. Additionally, CoT cannot fundamentally change its approach beyond the patterns it has been trained on, which limits its ability to reason creatively or adapt in unfamiliar scenarios.
What Would It Take for LLMs to Become True Reasoning Machines?
So, what do LLMs need to truly reason like humans? Here are some key areas where they require improvement and potential approaches to achieve it:
Symbolic Understanding: Humans reason by manipulating abstract symbols and relationships. LLMs, however, lack a genuine symbolic reasoning mechanism. Integrating symbolic AI or hybrid models that combine neural networks with formal logic systems could enhance their ability to engage in true reasoning (a minimal sketch of this hybrid pattern follows this list).
Causal Inference: True reasoning requires understanding cause and effect, not just statistical correlations. A model that reasons must infer underlying principles from data rather than merely predicting the next token. Research into causal AI, which explicitly models cause-and-effect relationships, could help LLMs transition from planning to reasoning.
Self-Reflection and Metacognition: Humans constantly evaluate their own thought processes by asking “Does this conclusion make sense?” LLMs, on the other hand, do not have a mechanism for self-reflection. Building models that can critically evaluate their own outputs would be a step toward true reasoning.
Common Sense and Intuition: Even though LLMs have access to vast amounts of knowledge, they often struggle with basic common-sense reasoning. This happens because they don’t have real-world experiences to shape their intuition, and they can’t easily recognize the absurdities that humans would pick up on right away. They also lack a way to bring real-world dynamics into their decision-making. One way to improve this could be by building a model with a common-sense engine, which might involve integrating real-world sensory input or using knowledge graphs to help the model better understand the world the way humans do.
Counterfactual Thinking: Human reasoning often involves asking, “What if things were different?” LLMs struggle with these kinds of “what if” scenarios because they’re limited by the data they’ve been trained on. For models to think more like humans in these situations, they would need to simulate hypothetical scenarios and understand how changes in variables can impact outcomes. They would also need a way to test different possibilities and come up with new insights, rather than just predicting based on what they’ve already seen. Without these abilities, LLMs can’t truly imagine alternative futures—they can only work with what they’ve learned.
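As one concrete illustration of the hybrid direction mentioned under Symbolic Understanding above, here is a minimal sketch assuming Python with the sympy library (the claim strings and the function name are illustrative assumptions): a model’s free-text arithmetic claim is re-checked by a symbolic engine rather than trusted because it sounds plausible.

```python
# Minimal neuro-symbolic sketch: parse a claimed equation and verify
# it symbolically instead of accepting it as plausible-sounding text.
from sympy import Eq, sympify

def verify_claim(claim: str) -> bool:
    """Check a claimed equation like '7*2 - 7*1 = 7' with sympy."""
    lhs, rhs = claim.split("=")
    return bool(Eq(sympify(lhs), sympify(rhs)))

print(verify_claim("7*2 - 7*1 = 7"))  # True:  the CoT answer checks out
print(verify_claim("7*2 - 7*1 = 8"))  # False: a confident-sounding error
```

Real neuro-symbolic systems go far beyond equation checking, but the division of labor is the same: the neural model proposes, the symbolic engine verifies.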
Conclusion
While LLMs may appear to reason, they are actually relying on planning techniques for solving complex problems. Whether solving a math problem or engaging in logical deduction, they are primarily organizing known patterns in a structured manner rather than deeply understanding the principles behind them. This distinction is crucial in AI research because if we mistake sophisticated planning for genuine reasoning, we risk overestimating AI’s true capabilities.
The road to true reasoning AI will require fundamental advancements beyond token prediction and probabilistic planning. It will demand breakthroughs in symbolic logic, causal understanding, and metacognition. Until then, LLMs will remain powerful tools for structured problem-solving, but they will not truly think in the way humans do.
0 notes
feral-coffee · 8 months ago
Text
Replacement thoughts for the nuance:
-Seeing the nude human body is inherently traumatic
Everyone is naked under their clothes, and the human body that we all have, that we all see in ourselves, should not be seen as inherently traumatic. Dysphoria and such exist, sure; however, society has never been composed of incorporeal personalities - all humans have bodies - and we should stop pretending that some bodyless society is the norm.
-Sex scenes in art are pointless
Sometimes sex scenes in art are gratuitous and only there to tempt those looking for porn, sometimes sex scenes are a plot point, and sometimes the sex scene falls in between the two. Viewer discretion is advised (aka check the warnings/tags and curate your own experience).
-Wearing kink-related clothing in public is similar to performing a sex scene in front of unwilling participants
Outfits that expose genitalia and are being put to use at Folsom? Yeah, definitely sex related. Swapping out your denim blue jeans for leather pants and NOT going around giving a full dissertation on how the leather makes you/your partner feel? No, the pants are just pants, and the material should not be such a big deal. Stop trying to create society-wide uniforms.
-Depicting female characters expressing sexuality is always degrading
I don't know how to make this simpler, but women are just as human as men, generally have thoughts just like men do, and sometimes voice them just as men do. Men, women, non-binary ... we are all human and should not be treated as separate species or as infants in adulthood.
-People's sexual fantasies are always an endorsement of the behavior they want to see in real life
As a kid, I fantasized about being a dragon, because 'ooo, pretty'. This never meant I actually expected to become a real-life dragon. Look back at your own life, and you'll likely find something similar. There's a reason the meme of a cat clinging to a wall above a door with the caption 'They said I could be anything, so I became a security camera' was funny. Not all dreams come true, nor are all dreams meant to come true. People are allowed to be silly; it's not that serious.
-Sex work is more traumatic and coercive than other types of work
Some people enjoy sex and don't care who they have it with, or are at least less picky about who they have it with. If they find that people are willing to pay them as they indulge in their own personal enjoyment, they may take that money as part of a perfectly fair exchange. Not all sex work involves human trafficking, just as not all human trafficking involves sex. The problem isn't the sex or the sex work; the problem is economic, about how we treat other people under the current classist system in general - aka the work and how workers are treated.
-Sex in a monogamous romantic relationship is somehow morally/fundamentally different from sex outside of romance and/or monogamy
Seeing as even the Bible refers to multiple definitions of what we call marriage - including taking a woman as your wife if her husband dies and you were her brother-in-law, or if you killed her husband on the battlefield, or if you already own her as a slave - this absolutely doesn't fit with people citing the Bible as their source of morality, which is where that nonsense started. If you are not part of the consensual relationship between the adults in question, then you are not part of that relationship and your opinion does not matter. Deal with your own relationship(s) or lack thereof and get your nose out of everyone else's personal business.
-Some clothing is inherently sexual
The clothing is not alive. It is not sentient. It has no opinion, thoughts, wants, or desires. Any sexuality you see in that clothing is what YOU have chosen to see. The clothes are not the problem; your inability to get your mind out of the gutter is the problem. Trying to mandate some uniform clothing standard for society will solve nothing. On the other hand, seeking help with your own need to over-sexualize everything and take everything personally might make a world of difference. 'Hypersexual disorder' is still a thing (https://pubmed.ncbi.nlm.nih.gov/19937105/), and claiming non-sexual things are 'inherently sexual' is a listed symptom (this is not a diagnosis and I am not your therapist or psychologist - this is merely something to take into consideration).
Case in point: I have Mormon neighbors who fret over bare shoulders, considering the uncovered shoulders to be risque and inherently sexual. I've lived in places where nobody cared about bare shoulders at all, and still live under the same idea that bare shoulders are just something that happens when it's hot outside and maybe wear sunscreen. We are not the same. I do not see tank tops as any more sexual than the completely non-sexual prairie dresses that many FLDS wear, I do not carry around the same cultural baggage that my neighbors do.
TL;DR Stop believing all the sensationalist crap you see in internet discourse. Stop claiming your own failings are society's fault. Sometimes society is wrong, sure, but sometimes it's just certain individuals who are wrong and you may be one of them.
Lines of thought that seem Normal but are actually rooted in extreme puritanism:
-Seeing the nude human body is inherently traumatic
-Sex scenes in art are pointless
-Wearing kink-related clothing in public is similar to performing a sex scene in front of unwilling participants
-Depicting female characters expressing sexuality is always degrading
-People's sexual fantasies are always an endorsement of the behavior they want to see in real life
-Sex work is more traumatic and coercive than other types of work

The goal is to treat sex as just another thing people do. That is a much healthier attitude than hiding it! It's not uniquely traumatic, it's not weird to talk about it or include it in society.
88K notes · View notes
in-sightjournal · 11 days ago
Text
Ask A Genius 1454: Context, Meaning, and Logic: Why Humans Still Outperform AI in Understanding
Rick Rosner is an accomplished television writer with credits on shows like Jimmy Kimmel Live!, Crank Yankers, and The Man Show. Over his career, he has earned multiple Writers Guild Award nominations—winning one—and an Emmy nomination. Rosner holds a broad academic background, graduating with the equivalent of eight majors. Based in Los Angeles, he continues to write and develop ideas while…
0 notes
dandelionsprout42 · 5 months ago
Text
Setting aside the very touchy subject that the mere concept of deadnaming becomes on Tumblr whenever it gets brought up in any way whatsoever:
I. Am. Not. Calling. That. "365 Copilot".
Microsoft Office for life. It's one of the most famous sub-brand names in history (in the same category as, say, Pepsi Max and Nintendo Switch), and I am not sitting by and letting the brand name even partially vanish just because Microsoft learnt nothing whatsoever from the "put NFTs into everything" market crash and jumped on the "put AI into everything" trend mere months later.
1 note · View note
borngeniusworld · 7 months ago
Text
đź§  Ready to Think Smarter? Master These 7 Critical Thinking Exercises!
Saarthi For Success
🔍 A Step-by-Step Guide to Sharpen Your Critical Thinking Skills
Critical thinking is like a muscle—the more you use it, the sharper it gets! 🧠 Here’s a practical roadmap to boost your ability to evaluate arguments, spot logical flaws, and make better decisions.
🛠️ Step 1: What is Critical Thinking?
Ever tried defining it yourself?
Exercise: Write your own definition.
Identify…
1 note · View note
ly0nstea · 3 months ago
Text
Generative AI is not indirect; they are very purposefully and directly taking other people's art and other creative works and bastardising them through a machine to spit out soulless and profit-hungry facsimiles. Calling it derivative is a generous statement: it's a copy, a forgery, an imitation. Derivative implies some original thought went into it to create something new, which is blatantly untrue, because AI is fundamentally incapable of reason or thought; all it knows how to do is matrix algebra and reaching the result it was trained to reach, and every word fed into it is just converted to a string of numbers. All you're really doing is ceding ground to silicon valley
Not to mention, popular discourse does not have to be legally accurate — "Eat the Rich" is a very popular term in discourse, are you going to decry it because no one actually wants to cannibalise the rich? And it's setting us back because people are going to think the poor are man-eating savages! No. You're not. Because that's obviously not the point of the phrase, just like how "AI Art is Theft" is not an inherently legal observation but a cultural one. AI doesn't just steal potential earnings, or paintings on a wall, it steals culture, it steals potential, it steals creativity from the public, it steals humanity in favour of your aunt wanting to know what her cat would look like if it was in Kiki's Delivery Service. It steals time, and energy, and computer parts that could be going into a million other projects that are worthwhile because theft is (again) not just about the illegal taking of property.
the framing of generative ai as "theft" in popular discourse has really set us back so far like not only should we not consider copyright infringement theft we shouldn't even consider generative ai copyright infringement
11K notes · View notes
vsonker · 10 months ago
Text
OpenAI Launches Its First Reasoning Model, o1, for ChatGPT Enterprise
OpenAI has made a significant leap in the world of artificial intelligence by launching its first reasoning-focused model, o1 (known during development by the codename “Strawberry”). This model is an advancement tailored specifically for ChatGPT Enterprise, designed to enhance AI’s ability to understand, analyze, and respond with…
0 notes
forgottenbones · 11 months ago
Text
YouTube: How OpenAI fixed AI + math #ai
1 note · View note
txttletale · 3 months ago
Note
i think artists not wanting our work to be fed to ai without our permission is intrinsic reason enough why it shouldn't happen
my political perspective: what artists "want" is completely immaterial to IP legislation, which is & always has been founded on the 'wants' of corporate rightsholders who exploit barriers to entry and monopolistic distribution practices to demand ownership of artists' works upon which they can establish new and ever more garish practices of rentseeking
my personal perspective: to hell with what artists give permission to be done with their work. you are free to criticize, to rail against, to disparage uses of your work you think are wrong, or facile -- just as cervantes excoriated shoddy third-party sequels in the second part of don quixote -- but if you wish to take preventative measures, to enforce your disapproval upon potential remixers and reusers, i think your sophomoric preciousness about your work makes you an enemy of everything i value about art and culture. you are fighting for a world without cross, without the low end theory, without garfield minus garfield or lasagna cat, without centos or cutups or blackout poetry, without video game modding, without plunderphonics or youtube poop or collage. anne rice's world, a world immeasurably poorer with a dead culture pinned to a board and preserved by immersion in the logics of capital
3K notes · View notes