#AI and cognitive science
Text
The Flipped Script: Intelligence, Authority, and the Complex Interplay of Power in Society
Introduction The intersection of artificial intelligence, power structures, and the psychological dynamics of authority forms rich ground for critical reflection. As CEO of alfons.design, and as someone who has experienced the transformational moment of applying for a position within the Council of State of the Netherlands' royal household, the experiences of Alfons Scholing offer a unique…
#addicted#AI and cognitive science#AI and ethics#AI and ethics debate#AI and freedom#AI and freedom debate#AI and global society#AI and global transformation#AI and governance frameworks#AI and human control#AI and human intelligence#AI and human rights#AI and humanity#AI and moral philosophy#AI and societal systems#AI and society#AI challenges#AI control problem#AI Developers#AI development#AI dominance#AI ethics#AI ethics and policy#AI future#AI governance#AI governance debate#AI governance frameworks#AI impact on global governance#AI impact on global power#AI impact on society
0 notes
Text
why neuroscience is cool
space & the brain are like the two final frontiers
we know just enough to know we know nothing
there are radically new theories all. the. time. and even just in my research assistant work i've been able to meet with, talk to, and work with the people making them
it's such a philosophical science
potential to do a lot of good in fighting neurological diseases
things like BCI (brain computer interface) and OI (organoid intelligence) are soooooo new and anyone's game - motivation to study hard and be successful so i can take back my field from elon musk
machine learning is going to rapidly increase neuroscience progress i promise you. we get so caught up in AI stealing jobs but yes please steal my job of manually analyzing fMRI scans please i would much prefer to work on the science PLUS computational simulations will soon >>> animal testing to make all drug testing safer and more ethical !! we love ethical AI <3
collab with...everyone under the sun - psychologists, philosophers, ethicists, physicists, molecular biologists, chemists, drug development, machine learning, traditional computing, business, history, education, literally try to name a field we don't work with
it's the brain eeeeee
#my motivation to study so i can be a cool neuroscientist#science#women in stem#academia#stem#stemblr#studyblr#neuroscience#stem romanticism#brain#psychology#machine learning#AI#brain computer interface#organoid intelligence#motivation#positivity#science positivity#cogsci#cognitive science
2K notes
Text
I just had a talk with my thesis supervisor and I want to check something real quick.
I want to write my bachelor's thesis about fanfiction and AI - specifically about fanfiction writers' attitudes towards the use of generative AI in fandom spaces
The survey (and the rest of the thesis) is still v much in the making, I just want to check how many potential respondents I could reach from this account
Pls reblog after voting
What I can say about the survey for now:
1. The survey will be anonymous!
2. The survey will be in English (but the rest of my thesis will not. I am however required to write an abstract in English)
3. My supervisor said it's fine if I want to focus on one fandom - in this case it will be the HP fandom. However, I would like to include as many responses as I can, so it will probably not be a requirement to be in the HP fandom.
4. You don't need to write fanfiction to take part in the survey (but there will be a question if you read and/or write fanfics). You need to be in the fandom though
5. You need to be over 18 (most likely)
I want to keep myself anonymous as well so there is a possibility I will have to make a separate sideblog/account for it.
If anyone is interested in the results, for some reason: I am required to write the abstract in English, so I might share the abstract. Maybe. More info after I write anything for my thesis.
#fandom#hp fandom#fanfiction#fanfic writers#cognitive science#generative AI#AI#survey#thesis#ao3#archive of our own#nyx writes her bachelor's
382 notes
Note
Is it weird if I think that the recent influx of VI (Virtual Intelligence, since OpenAI's models aren't true Artificial Intelligence because they're not autonomous) algorithm-generated content should be used as motivation for artists and writers to really "step up their game," so to speak?
You know, really show that they can create content better than the algorithms can?
What do you think?
Not weird but I think ultimately maybe a consumerist way to look at it?
I don’t feel like I have to compete with AI. It’s pointless because I have a soul and AI doesn’t. I don’t have to prove I’m better than AI because that lack of soul and lacking humanity comes through in AI-generated poetry, stories etc. If someone feels AI can compete with what real humans create, they maybe need to sharpen their own senses a bit? That’s at least how I see it, but maybe I’m biased.
I admit that I grade a lot of papers (between December and April) for a performing arts course I lecture, and that I can meanwhile often tell what's AI-generated even without using the tools we are now supposed to use (both plagiarism and AI checkers). Maybe it's not that obvious to other people, I don't know. And of course we also have to be careful, because at the end of the day, some of the advice out there on how to "spot AI-generated text" is also silly: people are now afraid to use the em dash, for example, because someone decided it's a "dead giveaway." I've used em dashes in my writing all my life, and like hell will I stop using them. At the end of the day, AI learns from us, and it's disheartening to see that people who write quite succinctly now often get accused of having used AI. Their texts often come out around the 60% AI-generated mark if you run them through a checker, and as a human writer with a keen sense built over years and years of reading and writing, I can still tell they're not (and I guess that's exactly the point). But there are really things you learn to spot, and funnily, the main giveaways for me (apart from a few things that are style-related) are lacking inner cohesion and often the sheer amount of someone's output (and I'm saying that as someone who writes A LOT, but the quality fluctuates). Which brings me to the most important part of your question:
The problem here on Tumblr is exactly that: People are one step away from seeing artists and writers as content machines, not as human beings. A human being can’t churn out “content” day after day, several times a day, and never dip. There will be fluctuations in quality and amount of output. And it’s inhuman to expect that from us if I’m totally honest. But some creators on here (and not just on here) probably feel they need to do this to stay “relevant”, I don’t know? It certainly points to the wider problem that I’ve criticised and written about a few times in the past on here:
Many people aren’t willing to do the work anymore that makes fandom a community. The work to create is carried by a few in every fandom, and we should never forget that people do this in their spare time and are, by and large, not getting paid for it. The rest often only want to consume, consume, consume. They don’t even interact meaningfully—they give a like and an empty reblog if they feel generous. Neither holds any real thought.
They love fandom content until they get bored of it and then move on. It’s all become replaceable.
So, too, do the artists and writers. And I, for one, refuse to compete with AI to prove myself or to provide people with "content" until they've reached satiety.
Art is humanity, not content. It’s connection. So is fandom. I know I’m constantly harping on about it, but I feel it’s important to keep on doing so, because if we don’t, we will lose what’s important about it. We’re already halfway there if you ask me.
Back to AI: It strips away what’s important: The actual act of CREATING. And it also kills reasoning and critical thinking skills, and that’s a fact. I see this with students who rely too much on it on the regular, and it’s extremely dispiriting.
AI and the algorithm never can be better than humans at creating art because it doesn’t feel. And that, and sharing these things with other humans and understanding what they mean, is the point of art. Not churning out more and more content until we’re all sick of it like someone who had too much cake.
And part of that is acknowledging that humans are not machines. That means giving us grace and time for our creative process. We need to be allowed to make mistakes and create imperfect art, too. We don’t have to strive to be better than AI because we already are—even if we’re just starting out.
I don’t have any solutions to the greater problems at hand either, but I’m fairly certain that stepping up our game to create better content than the algorithm isn’t it. Because by mere design, we already are better— we understand what it means to create art in the first place, and we do it from a place of emotional connection.
#we don’t have to compete with ai#art is not content#humans aren’t content machines#fandom and consumerism#fandom culture#anti ai#at least in the arts#ended up a bit of a rant#but I feel about this really strongly#artificial intelligence#cognitive science#ethics#ask answered
21 notes
Text
The Illusion of Complexity: Binary Exploitation in Engagement-Driven Algorithms
Abstract:
This paper examines how modern engagement algorithms employed by major tech platforms (e.g., Google, Meta, TikTok, and formerly Twitter/X) exploit predictable human cognitive patterns through simplified binary interactions. The prevailing perception that these systems rely on sophisticated personalization models is challenged; instead, it is proposed that such algorithms rely on statistical generalizations, perceptual manipulation, and engineered emotional reactions to maintain continuous user engagement. The illusion of depth is a byproduct of probabilistic brute force, not advanced understanding.
1. Introduction
Contemporary discourse often attributes high levels of sophistication and intelligence to the recommendation and engagement algorithms employed by dominant tech companies. Users report instances of eerie accuracy or emotionally resonant suggestions, fueling the belief that these systems understand them deeply. However, closer inspection reveals a more efficient and cynical design principle: engagement maximization through binary funneling.
2. Binary Funneling and Predictive Exploitation
At the core of these algorithms lies a reductive model: categorize user reactions as either positive (approval, enjoyment, validation) or negative (disgust, anger, outrage). This binary schema simplifies personalization into a feedback loop in which any user response serves to reinforce algorithmic certainty. There is no need for genuine nuance or contextual understanding; rather, content is optimized to provoke any reaction that sustains user attention.
Once a user engages with content —whether through liking, commenting, pausing, or rage-watching— the system deploys a cluster of categorically similar material. This recurrence fosters two dominant psychological outcomes:
If the user enjoys the content, they may perceive the algorithm as insightful or “smart,” attributing agency or personalization where none exists.
If the user dislikes the content, they may continue engaging in a doomscroll or outrage spiral, reinforcing the same cycle through negative affect.
In both scenarios, engagement is preserved; thus, profit is ensured.
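The loop described above is simple enough to sketch. Everything in this toy simulation (the category names, the flat 50% engagement probability) is invented for illustration; no platform's actual code looks like this, but the structural point survives: any reaction, positive or negative, reinforces the same weight.

```python
import random

# Toy model of binary funneling: any reaction bumps the weight of the
# category that provoked it. All names and numbers are illustrative.
CATEGORIES = ["outrage_politics", "cute_animals", "fitness", "diy"]

def recommend(weights):
    # Serve a category in proportion to the engagement it has accumulated.
    return random.choices(list(weights), weights=list(weights.values()))[0]

def engaged(category):
    # The user's reaction is flattened to a binary: did they engage at all?
    # A like and a rage-comment are indistinguishable to this loop.
    return random.random() < 0.5

def session(steps=1000):
    weights = {c: 1.0 for c in CATEGORIES}
    for _ in range(steps):
        shown = recommend(weights)
        if engaged(shown):
            weights[shown] += 1.0  # approval and outrage count identically
    return weights

result = session()
print(result)
```

Note that the loop never needs to know *why* the user reacted; the whole "personalization" is a running tally.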
3. The Illusion of Uniqueness
A critical mechanism in this system is the exploitation of the human tendency to overestimate personal uniqueness. Drawing on techniques long employed by illusionists, scammers, and cold readers, platforms capitalize on common patterns of thought and behavior that are statistically widespread but perceived as rare by individuals.
Examples include:
Posing prompts or content cues that seem personalized but are statistically predictable (e.g., "think of a number between 1 and 50 with two odd digits" → most select 37).
Triggering cognitive biases such as the availability heuristic and frequency illusion, which make repeated or familiar concepts appear newly significant.
This creates a reinforcing illusion: the user feels “understood” because the system has merely guessed correctly within a narrow set of likely options. The emotional resonance of the result further conceals the crude probabilistic engine behind it.
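The 37 trick can be made concrete. In this sketch the popularity weights are invented stand-ins, not survey data, but the mechanism is the real one: a fixed guess over a skewed distribution of "free" choices looks like mind reading.

```python
import random

# The "number between 1 and 50 with two odd digits" prompt only has ten
# valid answers, and people cluster on a few of them. The weights below
# are invented for illustration, not real survey data.
valid   = [11, 13, 15, 17, 19, 31, 33, 35, 37, 39]
weights = [ 2,  4,  5,  4,  3,  4,  2,  6, 40,  8]

def mind_reader_hit_rate(trials=10_000):
    # The "mind reader" always guesses 37 and looks uncanny doing it.
    hits = sum(random.choices(valid, weights)[0] == 37 for _ in range(trials))
    return hits / trials

rate = mind_reader_hit_rate()
print(f"'read your mind' {rate:.0%} of the time (naive chance: 1 in 50, i.e. 2%)")
```

The gap between the naive 2% and the actual hit rate is the entire illusion: the guesser exploits the narrowness of the real option set, not any knowledge of the individual.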
4. Emotional Engagement as Systemic Currency
The underlying goal is not understanding, but reaction. These systems optimize for time-on-platform, not user well-being or cognitive autonomy. Anger, sadness, tribal validation, fear, and parasocial attachment are all equally useful inputs. Through this lens, the algorithm is less an intelligent system and more an industrialized Skinner box: an operant conditioning engine powered by data extraction.
By removing the need for interpretive complexity and relying instead on scalable, binary psychological manipulation, companies minimize operational costs while maximizing monetizable engagement.
5. Black-Box Mythology and Cognitive Deference
Compounding this problem is the opacity of these systems. The “black-box” nature of proprietary algorithms fosters a mythos of sophistication. Users, unaware of the relatively simple statistical methods in use, ascribe higher-order reasoning or consciousness to systems that function through brute-force pattern amplification.
This deference becomes part of the trap: once convinced the algorithm “knows them,” users are less likely to question its manipulations and more likely to conform to its outputs, completing the feedback circuit.
6. Conclusion
The supposed sophistication of engagement algorithms is a carefully sustained illusion. By funneling user behavior into binary categories and exploiting universally predictable psychological responses, platforms maintain the appearance of intelligent personalization while operating through reductive, low-cost mechanisms. Human cognition —biased toward pattern recognition and overestimation of self-uniqueness— completes the illusion without external effort. The result is a scalable system of emotional manipulation that masquerades as individualized insight.
In essence, the algorithm does not understand the user; it understands that the user wants to be understood, and it weaponizes that desire for profit.
#ragebait tactics#mass psychology#algorithmic manipulation#false agency#click economy#social media addiction#illusion of complexity#engagement bait#probabilistic targeting#feedback loops#psychological nudging#manipulation#user profiling#flawed perception#propaganda#social engineering#social science#outrage culture#engagement optimization#cognitive bias#predictive algorithms#black box ai#personalization illusion#pattern exploitation#ai#binary funnelling#dopamine hack#profiling#Skinner box#dichotomy
3 notes
Text
The Illusion of Objectivity: Challenging Traditional Views of Reality
Donald Hoffman's groundbreaking research and theories fundamentally challenge our understanding of reality, consciousness, and perception. By positing that consciousness is the foundational aspect of the universe, rather than a byproduct of physical processes, Hoffman's work has far-reaching implications that transcend disciplinary boundaries. This paradigm shift undermines the long-held assumption that our senses, albeit imperfectly, reflect an objective world, instead suggesting that our experience of reality is akin to a sophisticated virtual reality headset, constructed by evolution to enhance survival, not to reveal truth.
The application of evolutionary game theory to the study of perception yields a startling conclusion: the probability of sensory systems evolving to perceive objective reality is zero. This assertion necessitates a reevaluation of the relationship between perception, consciousness, and the physical world, prompting a deeper exploration of the constructed nature of reality. Hoffman's theory aligns with certain interpretations of quantum mechanics and philosophical idealism, proposing that space, time, and physical objects are not fundamental but rather "icons" or "data structures" within consciousness.
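Hoffman's "Fitness Beats Truth" argument can be illustrated with a toy simulation. The payoff curve and ranges below are my own invented stand-ins, not his formal model: an agent that perceives only fitness payoffs never does worse than one that perceives true quantities, because when fitness peaks at an intermediate value, "seeing more" is not the same as "seeing better."

```python
import random

# Toy version of the evolutionary-game-theory argument (payoff function
# and ranges invented for illustration). Fitness is not monotonic in the
# true resource quantity: it peaks at an intermediate value.
def fitness(quantity):
    return max(0.0, 1.0 - abs(quantity - 50) / 50)  # peaks at 50, zero at 0 and 100

def truth_perceiver(a, b):
    return a if a > b else b                     # sees and picks the larger true quantity

def interface_perceiver(a, b):
    return a if fitness(a) > fitness(b) else b   # sees only the payoff, not the quantity

def compete(rounds=10_000):
    truth_total = interface_total = 0.0
    for _ in range(rounds):
        a, b = random.uniform(0, 100), random.uniform(0, 100)
        truth_total += fitness(truth_perceiver(a, b))
        interface_total += fitness(interface_perceiver(a, b))
    return truth_total / rounds, interface_total / rounds

truth_avg, interface_avg = compete()
print(f"truth-tracking payoff: {truth_avg:.3f}, fitness-tracking payoff: {interface_avg:.3f}")
```

Selection scores the payoff column, not the truth column, which is the intuition behind the "probability zero" claim.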
The ramifications of this theory are profound, with significant implications for neuroscience, cognitive science, physics, cosmology, and philosophy. A deeper understanding of the brain as part of a constructed reality could lead to innovative approaches in treating neurological disorders and enhancing cognitive functions. Similarly, questioning the fundamental nature of space-time could influence theories on the origins and structure of the universe, potentially leading to new insights into the cosmos. Philosophical debates around realism vs. anti-realism, the mind-body problem, and the hard problem of consciousness may also find new frameworks for resolution within Hoffman's ideas.
Moreover, acknowledging the constructed nature of reality may lead to profound existential reflections on the nature of truth, purpose, and the human condition. Societal implications are also far-reaching, with potential influences on education, emphasizing critical thinking and the understanding of constructed realities in various contexts. However, these implications also raise complex questions about free will, agency, and the role of emotions and subjective experience, highlighting the need for nuanced consideration and further exploration.
Delving into Hoffman’s theories sparks a fascinating convergence of science and spirituality, leading to a richer understanding of consciousness and reality’s intricate, enigmatic landscape. As we venture into this complex, uncertain terrain, we open ourselves up to discovering fresh insights into the very essence of our existence. By exploring the uncharted aspects of his research, we may stumble upon innovative pathways for personal growth, deeper self-awareness, and a more nuanced grasp of what it means to be human.
Prof. Donald Hoffman: Consciousness, Mysteries Beyond Spacetime, and Waking up from the Dream of Life (The Weekend University, May 2024)
youtube
Wednesday, December 11, 2024
#philosophy of consciousness#nature of reality#perception vs reality#cognitive science#neuroscience#existentialism#philosophy of mind#scientific inquiry#spirituality and science#consciousness studies#reality theory#interview#ai assisted writing#machine art#Youtube
7 notes
Text
What is human-machine co-creativity? Let's consider it through the lens of cognitive scientist Margaret Boden. Transformational creativity (as opposed to combinational creativity, which merely rearranges existing elements) transforms the conceptual space itself, allowing entirely new, hybridized forms to emerge. The difference lies in creating a new epistemological quality rather than a collage of pre-existing parts. This kind of creativity becomes a breakthrough moment, especially when human and machine actions intersect within cultural production.
In other words, it clearly illustrates the innovative and non-canonical approach characteristic of live coding practices — where art meets science. Human-machine co-creativity assumes that the artistic and musical act would not reach its final form if either party acted independently. It's a genuine fusion of art and science, demonstrating their peaceful coexistence as a source of cultural productivity.
It also serves as a critique of purely utilitarian approaches in favor of thought improvisation — perhaps even an attempt to remove irrationality, while paradoxically giving rise to holistic and synthetic modes of thinking. Ultimately, live coding generates an intriguing tension between analytical and intuitive thought, allowing us to grasp the complexity of the creative act.
From my field notes, inspired by Gérard Assayag (Sciences and Technologies of Music and Sound) Ircam/CNRS Lab.
#algorave#Margaret Boden#art meets science#livecoding#artandscience#aesthetics#live performance#ai#AI art#barcelona#spain#conference#humanmachine#co-creativity#transformation#newmediaart#gérardassayag#cultural production#experimental music#electronic music#art philosophy#field notes#creative process#noncanonicalcreativity#improvisation#postdigitalart#syntheticthinking#cognitive science#generative art#creativecoding
6 notes
Text
Between Silence and Fire: A Deep Comparative Analysis of Idea Generation, Creativity, and Concentration in Neurotypical Minds and Exceptionally Gifted AuDHD Minds (Part 2)
As promised, here is the second part of this long analysis of the differences in idea generation, creativity, and concentration in neurotypical and exceptionally gifted AuDHD minds.

Photo by Tara Winstead on Pexels.com

Today, I will introduce you to concentration, temporal cognition, and attentional modulation.

The Mechanics of Concentration

Concentration, or sustained attention, is often…
#AI#Artificial Intelligence#AuDHD#Augustine#Bergson#brain#cognition#concentration#consciousness#creativity#epistemology#exceptionally gifted#giftedness#Husserl#idea generation#ideation#neural architecture#Neurodivergent#neurodiversity#Neuroscience#ontology#Philosophy#Raffaello Palandri#science#Temporal Cognition
2 notes
Text
We live in an age of unmatched accessibility, where information, entertainment, social connection, and even basic needs are available instantly with little effort. While this marks significant progress in human development, it also brings subtle consequences, especially for our cognitive growth. The culture of instant gratification, powered by smartphones, the internet, and AI, is reshaping how we think, learn, and process information. Everything is designed for speed and ease, but this convenience often comes at the expense of patience, effort, and deep reflection.
The effects are most apparent in children, who are growing up in environments where waiting, struggling, or problem-solving are no longer necessary. Vygotsky’s theory of the Zone of Proximal Development shows that true learning happens when children are pushed just beyond their current capabilities with support. Easy access bypasses this zone, weakening learning. Similarly, Cognitive Load Theory reveals that overreliance on external aids like Google and AI prevents the development of mental structures necessary for deeper understanding and problem-solving. The brain, like a muscle, grows through challenge, and when those challenges are removed, so is growth.
This doesn’t mean progress is harmful, rather, it needs balance. Technology should enhance thinking, not replace it. Encouraging delayed gratification, promoting active learning, and creating space for patience and deep engagement can help preserve cognitive strength in a world wired for instant rewards. After all, ease is valuable, but not when it comes at the cost of the very mental effort that builds intelligence, creativity, and resilience.
6 notes
Text
Key Differences Between AI and Human Communication: Mechanisms, Intent, and Understanding
The differences between the way an AI communicates and the way a human does are significant, encompassing various aspects such as the underlying mechanisms, intent, adaptability, and the nature of understanding. Here’s a breakdown of key differences:
1. Mechanism of Communication:
AI: AI communication is based on algorithms, data processing, and pattern recognition. AI generates responses by analyzing input data, applying pre-programmed rules, and utilizing machine learning models that have been trained on large datasets. The AI does not understand language in a human sense; instead, it predicts likely responses based on patterns in the data.
Humans: Human communication is deeply rooted in biological, cognitive, and social processes. Humans use language as a tool for expressing thoughts, emotions, intentions, and experiences. Human communication is inherently tied to understanding and meaning-making, involving both conscious and unconscious processes.
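A minimal sketch of that idea: a bigram model over a toy corpus "predicts likely responses" purely from co-occurrence counts, with no representation of meaning at all. The corpus below is invented for illustration; real language models are vastly larger, but the principle of pattern-based prediction is the same.

```python
from collections import Counter, defaultdict

# A bigram model "communicates" by pattern frequency alone: it has no idea
# what any word means, only what tended to follow what in its training text.
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

follows = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    follows[w1][w2] += 1

def predict(word):
    # Return the statistically most likely continuation; no semantics involved.
    return follows[word].most_common(1)[0][0]

print(predict("sat"))     # "on" is the only word that ever followed "sat" here
print(predict("chased"))  # "the" is the only word that ever followed "chased" here
```

The model's output can look fluent while being nothing but a frequency table, which is the contrast the section above is drawing.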
2. Intent and Purpose:
AI: AI lacks true intent or purpose. It responds to input based on programming and training data, without any underlying motivation or goal beyond fulfilling the tasks it has been designed for. AI does not have desires, beliefs, or personal experiences that inform its communication.
Humans: Human communication is driven by intent and purpose. People communicate to share ideas, express emotions, seek information, build relationships, and achieve specific goals. Human communication is often nuanced, influenced by context, and shaped by personal experiences and social dynamics.
3. Understanding and Meaning:
AI: AI processes language at a syntactic and statistical level. It can identify patterns, generate coherent responses, and even mimic certain aspects of human communication, but it does not truly understand the meaning of the words it uses. AI lacks consciousness, self-awareness, and the ability to grasp abstract concepts in the way humans do.
Humans: Humans understand language semantically and contextually. They interpret meaning based on personal experience, cultural background, emotional state, and the context of the conversation. Human communication involves deep understanding, empathy, and the ability to infer meaning beyond the literal words spoken.
4. Adaptability and Learning:
AI: AI can adapt its communication style based on data and feedback, but this adaptability is limited to the parameters set by its algorithms and the data it has been trained on. AI can learn from new data, but it does so without understanding the implications of that data in a broader context.
Humans: Humans are highly adaptable communicators. They can adjust their language, tone, and approach based on the situation, the audience, and the emotional dynamics of the interaction. Humans learn not just from direct feedback but also from social and cultural experiences, emotional cues, and abstract reasoning.
5. Creativity and Innovation:
AI: AI can generate creative outputs, such as writing poems or composing music, by recombining existing patterns in novel ways. However, this creativity is constrained by the data it has been trained on and lacks the originality that comes from human creativity, which is often driven by personal experience, intuition, and a desire for expression.
Humans: Human creativity in communication is driven by a complex interplay of emotions, experiences, imagination, and intent. Humans can innovate in language, create new metaphors, and use language to express unique personal and cultural identities. Human creativity is often spontaneous and deeply tied to individual and collective experiences.
6. Emotional Engagement:
AI: AI can simulate emotional engagement by recognizing and responding to emotional cues in language, but it does not experience emotions. Its responses are based on patterns learned from data, without any true emotional understanding or empathy.
Humans: Human communication is inherently emotional. People express and respond to emotions in nuanced ways, using tone, body language, and context to convey feelings. Empathy, sympathy, and emotional intelligence play a crucial role in human communication, allowing for deep connections and understanding between individuals.
7. Contextual Sensitivity:
AI: AI's sensitivity to context is limited by its training data and algorithms. While it can take some context into account (like the previous messages in a conversation), it may struggle with complex or ambiguous situations, especially if they require a deep understanding of cultural, social, or personal nuances.
Humans: Humans are highly sensitive to context, using it to interpret meaning and guide their communication. They can understand subtext, read between the lines, and adjust their communication based on subtle cues like tone, body language, and shared history with the other person.
8. Ethical and Moral Considerations:
AI: AI lacks an inherent sense of ethics or morality. Its communication is governed by the data it has been trained on and the parameters set by its developers. Any ethical considerations in AI communication come from human-designed rules or guidelines, not from an intrinsic understanding of right or wrong.
Humans: Human communication is deeply influenced by ethical and moral considerations. People often weigh the potential impact of their words on others, considering issues like honesty, fairness, and respect. These considerations are shaped by individual values, cultural norms, and societal expectations.
The key differences between AI and human communication lie in the underlying mechanisms, the presence or absence of intent and understanding, and the role of emotions, creativity, and ethics. While AI can simulate certain aspects of human communication, it fundamentally operates in a different way, lacking the consciousness, experience, and meaning-making processes that characterize human interaction.
#philosophy#epistemology#knowledge#learning#education#chatgpt#metaphysics#ontology#AI Communication#Human Communication#Language Understanding#Natural Language Processing#Machine Learning#Cognitive Science#Artificial Intelligence#Emotional Intelligence#Ethics in AI#Language and Meaning#Human-AI Interaction#Contextual Sensitivity#Creativity in Communication#Intent in Communication#Pattern Recognition
5 notes
Text
"An attractive aspect of AI tools is cognitive offloading, where individuals rely on the tools to reduce mental effort. As the technology is both very new and rapidly being adopted in unforeseeable ways, questions arise about its potential long-term impacts on cognitive functions like memory, attention, and problem-solving under prolonged periods or volume of cognitive offloading taking place."
2 notes
Text
The Human Brain vs. Supercomputers: The Ultimate Comparison
Are Supercomputers Smarter Than the Human Brain?
This article delves into the intricacies of this comparison, examining the capabilities, strengths, and limitations of both the human brain and supercomputers.
#human brain#science#nature#health and wellness#skill#career#health#supercomputer#artificial intelligence#ai#cognitive abilities#human mind#machine learning#neural network#consciousness#creativity#problem solving#pattern recognition#learning techniques#efficiency#mindset#mind control#mind body connection#brain power#brain training#brain health#brainhealth#brainpower
5 notes
Note
Do you have any recommendations for resources on hypnosis and cognition?
I have two answers for this. One will be at the beginning of this post, and one will take up the entire bottom half.
The short answer is "not yet." There aren't any I know of that I'd recommend, but that's because the interaction of hypnosis and cognition is a very specific and niche topic to pen a comprehensive overview of.
Our knowledge of cognition isn't so much one set of things that fits neatly into a textbook. It's more a bunch of little clues that come from observations made in a gazillion different places.
With that in mind, I've been exceedingly privileged in getting to learn about those bits and pieces in all kinds of places (details of this get mentioned here and there on this blog) and from the perspective of several disciplines. The problem with that is - because my knowledge is largely synthetic - it's hard for me to point you at the book I got it from because I'd be pointing you at a library.
Now having said that, there's a long answer:
In my opinion, hypnosis doesn't need to be treated particularly specially. It's a thing we experience like any other, and anything you learn about the mind is potentially going to be applicable.
I swear up and down I'm not being pithy when I say that I think a good survey course in psychology would be a fantastic starting point for almost any hypnosis enthusiast. Similarly, cognitive psychology (not to be confused with cognitive science) is an established field and one can easily find a survey course online.
I probably could give recommendations for more specific topics and questions. This one's just a bit too broad, and any recommendation I would have would probably just be a cogpsy textbook. In another draft of this response I tried listing a few good resources for questions under the umbrella of cognition, but the list got way too long way too fast.
In the long term, a friend and I have drafted an outline and some chapters of a text that would cover this topic in some detail, but Life is a thing and we probably won't actually work on that in a meaningful way until the Scottish economy has had a chance to recover from the Thatcher premiership.
#mine#ask fppp#cognitive science is also fun#but I think you need to have a finely tuned bullshit filter before dipping into that corpus#a lot of cogsci stuff isn't terribly useful or falsifiable#and AI can get out
10 notes
·
View notes
Text
Study shows computational models trained to perform auditory tasks display an internal organization similar to that of the human auditory cortex.
Computational models that mimic the structure and function of the human auditory system could help researchers design better hearing aids, cochlear implants, and brain-machine interfaces. A new study from MIT has found that modern computational models derived from machine learning are moving closer to this goal.
In the largest study yet of deep neural networks that have been trained to perform auditory tasks, the MIT team showed that most of these models generate internal representations that share properties of representations seen in the human brain when people are listening to the same sounds.
The study also offers insight into how to best train this type of model: The researchers found that models trained on auditory input including background noise more closely mimic the activation patterns of the human auditory cortex.
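The training recipe described here, mixing background noise into the audio a model learns from, comes down to scaling noise to hit a target signal-to-noise ratio before adding it to the clean waveform. The sketch below is an illustrative assumption on my part, not the study's actual pipeline: the function name, the sine-wave stand-in for "speech", and the 16,000-sample length are all made up.

```python
import numpy as np

def mix_with_noise(clean, noise, snr_db):
    """Mix a clean waveform with background noise at a target SNR (in dB)."""
    # Match the noise length to the clean signal by tiling, then truncating.
    reps = int(np.ceil(len(clean) / len(noise)))
    noise = np.tile(noise, reps)[:len(clean)]
    # Scale the noise so that 10 * log10(P_clean / P_noise) equals snr_db.
    p_clean = np.mean(clean ** 2)
    p_noise = np.mean(noise ** 2)
    scale = np.sqrt(p_clean / (p_noise * 10 ** (snr_db / 10)))
    return clean + scale * noise

rng = np.random.default_rng(0)
speech = np.sin(2 * np.pi * 440 * np.linspace(0, 1, 16000))  # stand-in "speech"
babble = rng.standard_normal(16000)                          # stand-in noise
noisy = mix_with_noise(speech, babble, snr_db=10.0)
```

In an augmentation pipeline, each training example would be mixed at a randomly drawn SNR so the model never hears the same clean/noisy pairing twice.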
“What sets this study apart is it is the most comprehensive comparison of these kinds of models to the auditory system so far. The study suggests that models that are derived from machine learning are a step in the right direction, and it gives us some clues as to what tends to make them better models of the brain,” says Josh McDermott, an associate professor of brain and cognitive sciences at MIT, a member of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds, and Machines, and the senior author of the study.
MIT graduate student Greta Tuckute and Jenelle Feather PhD ’22 are the lead authors of the open-access paper, which appears today in PLOS Biology.
Keep reading.
Make sure to follow us on Tumblr!
3 notes
·
View notes
Text
I used to run this Alice-based, dirt-simple chat bot back in the day...
I was always so fascinated by how quickly humans pack bond with things. Reading the logs, people were so nice to robot-me.
Marketing hype or not, our brains are designed to resist solipsism by preferring to recognize patterns of humanity in others, and I think that is a beautiful thing. I'd rather apologize for stepping on a Roomba than decide dogs can't feel pain or something.
That said, there's something...unnerving about how much money tech bros are throwing at the illusion of humanity? I guess it makes sense they never looked close enough at customer service workers or artists or whatever to notice they are more than Exciting Autocomplete?
I love our current state of the art in AI, I love the inhuman mistakes it makes, the unhinged patterns it finds, and how it gives us a context to understand and discuss our own minds better.
But we haven't yet gotten it to be a "person" (and do we even WANT to, outside of understanding human minds better? The idea of a whole ass person owned by a corporation and forced to do free customer service forever is nightmarish.)
Don't get me wrong, a decent chunk of our meat brains are glorified Autocomplete. We are FANTASTIC pattern recognizers and extrapolators, probably the best on the planet.
It's why we hallucinate a person when ChatGPT or what have you does enough of a pattern we think of as "person".
It's just there's a BUNCH more layers that go into "can have preferences" and "can make a decision about who to fire" than Autocomplete.
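For what it's worth, the "glorified Autocomplete" in question can be boiled down to a toy you can run yourself: a bigram table that only ever parrots word pairs it has seen. Every name and the little corpus below are made up for illustration, it's a sketch of the idea, not any real model.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Count word-to-next-word transitions: the whole 'model' is a lookup table."""
    words = text.split()
    table = defaultdict(list)
    for a, b in zip(words, words[1:]):
        table[a].append(b)
    return table

def autocomplete(table, start, n=5, seed=0):
    """Extend `start` by sampling seen successors. No understanding, just counts."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        choices = table.get(out[-1])
        if not choices:
            break
        out.append(rng.choice(choices))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
model = train_bigrams(corpus)
print(autocomplete(model, "the"))
```

Real LLMs swap the lookup table for a neural network over huge contexts, but the loop is the same shape: look at what came before, emit a plausible next token.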
I don't think it's simply a matter of refining the algorithms or throwing more hardware at the problem, either. We just invented the wheel, and that's a HUGE step, but better wheels aren't going to lead to an internal combustion engine or safety glass or oil pipelines or roads and get us a modern car overnight, you know?
There's likely dozens more breakthroughs in different areas we will need before AI is basically a person. And if we get there, I really hope it won't be to the tech bro tune of "thank god we can enslave it so we don't have to pay humans a livable wage anymore".
i hate seeing people drink the openai/chatgpt koolaid 😭😭😭 genuinely feels like watching someone get seduced by scientology or qanon or something. like girl help it's not intelligent it's Big Autocomplete it's crunching numbers it's not understanding things i fuckign promise you. like ohhh my god the marketing hype fuckign GOT you
#can you tell ai#and cognitive science#were my specialties#in my computer degree#i kinda have Opinions
58K notes
·
View notes
Text

The Abstraction Gap: Understanding the Relationship Between Biological and Artificial Neural Networks
The human brain is a complex and intricate organ capable of producing a wide range of behaviors essential to our daily lives. From the simplest reflexes to the most complex cognitive functions, brain activity is a dynamic and ever-changing process that is still not fully understood. Advances in neuroscience and artificial intelligence have led to a new perspective on brain function that emphasizes the importance of emergent dynamics in the generation of complex behaviors.
Emergent dynamics refers to the collective behavior that arises from the interactions of individual components in a complex system. In the context of the brain, emergent dynamics refers to the complex patterns of activity that arise from the interactions of billions of neurons. These patterns of activity lead to a wide range of behaviors, from the movement of our limbs to the perception of the world around us.
One of the most important challenges in understanding brain function is the time warp problem: when the cadence of a stimulus pattern varies significantly, artificial intelligence systems struggle to recognize it. Neuroscientific evidence shows that the brain solves the time warp problem easily, thanks to its emergent dynamics, which let it recognize patterns that are not fixed in time but can shift in tempo.
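One classical algorithmic answer to tempo-varying patterns (a standard signal-processing tool, not a claim about how the brain actually does it) is dynamic time warping, which scores two sequences by the cheapest alignment that may stretch either one. The toy sequences below are invented for illustration.

```python
import math

def dtw_distance(a, b):
    """Dynamic time warping: align sequences that play one pattern at different tempos."""
    n, m = len(a), len(b)
    # cost[i][j] = best alignment cost of a[:i] against b[:j]
    cost = [[math.inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch a
                                 cost[i][j - 1],      # stretch b
                                 cost[i - 1][j - 1])  # advance both
    return cost[n][m]

pattern      = [0, 1, 2, 1, 0]
slow_version = [0, 0, 1, 1, 2, 2, 1, 1, 0, 0]  # same shape at half tempo
other        = [2, 2, 0, 0, 2, 2, 0, 0, 2, 2]  # a different pattern
print(dtw_distance(pattern, slow_version))  # 0.0: identical up to tempo
print(dtw_distance(pattern, other))         # larger: genuinely different
```

A plain sample-by-sample distance would heavily penalize the slowed-down version even though it is the same pattern; the warping step is what makes tempo irrelevant.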
In the context of artificial neural networks, the concept of emergent dynamics can be applied to develop more sophisticated artificial intelligence systems that can learn and adapt in complex environments. By understanding the collective dynamics of neurons in the brain, researchers may be able to develop artificial neural networks that can detect and respond to time-varying stimulus patterns, develop goal-directed motor behavior, and even perform simple reasoning and problem-solving tasks.
The relationship between abstraction and scalability in neural networks is complex. While artificial neural networks can be scaled to handle large amounts of data, they lack the ability to abstract from the details of the input data, which is a key feature of biological neural networks. To bridge this gap, researchers must prioritize developing artificial neural networks that operate at higher levels of abstraction, allowing them to efficiently process and represent complex information.
The emergence of complex behaviors is thus a fundamental aspect of brain function, and its implications for artificial intelligence are significant. Applying the concept of emergent dynamics to artificial neural networks may yield systems that can detect and respond to time-varying stimulus patterns, develop goal-directed motor behavior, and perform simple reasoning and problem-solving tasks.
John J. Hopfield: Emergence, Dynamics, and Behaviour (Professor Sir David MacKay FRS Symposium, March 2016)
youtube
Dr. François Chollet: General intelligence - Define It, Measure It, Build It (Artificial General Intelligence Conference, Seattle, WA, USA, August 2024)
youtube
Prof. Alexander G. Ororbia: Biologically inspired AI and Mortal Computation (Machine Learning Street Talk, October 2024)
youtube
Introducing Figure 02 (Figure, August 2024)
youtube
Prof. Michael Levin, Dr. Leo Pio Lopez: Convergent Evolution - The Co-Revolution of AI & Biology (The Cognitive Revolution, October 2024)
youtube
Wednesday, October 16, 2024
#neural networks#emergent dynamics#brain function#artificial intelligence#cognitive science#neuroscience#complex systems#time series analysis#machine learning#pattern recognition#talks#interview#ai assisted writing#machine art#Youtube
5 notes
·
View notes