#Computer Science & Artificial Intelligence
noosphe-re · 2 years ago
Text
"There was an exchange on Twitter a while back where someone said, ‘What is artificial intelligence?' And someone else said, 'A poor choice of words in 1954'," he says. "And, you know, they’re right. I think that if we had chosen a different phrase for it, back in the '50s, we might have avoided a lot of the confusion that we're having now." So if he had to invent a term, what would it be? His answer is instant: applied statistics. "It's genuinely amazing that...these sorts of things can be extracted from a statistical analysis of a large body of text," he says. But, in his view, that doesn't make the tools intelligent. Applied statistics is a far more precise descriptor, "but no one wants to use that term, because it's not as sexy".
'The machines we have now are not conscious', Lunch with the FT, Ted Chiang, by Madhumita Murgia, 3 June/4 June 2023
cheekios · 1 year ago
Text
Rationing Insulin.
Blood sugar reading this morning. The average blood sugar reading should be between 60 mg/dl and 100 mg/dl. I am terrified of not being able to administer my insulin simply because I was too poor to afford it. I am strongly in need of community help.
CA: $HushEmu
I am happy to announce I raised $33 🎉 I only need $417 to get my prescription
1000rh · 4 months ago
Text
In the twentieth century, few would have ever defined a truck driver as a ‘cognitive worker’, an intellectual. In the early twenty-first, however, the application of artificial intelligence (AI) in self-driving vehicles, among other artefacts, has changed the perception of manual skills such as driving, revealing how the most valuable component of work in general has never been just manual, but has always been cognitive and cooperative as well. Thanks to AI research – we must acknowledge it – truck drivers have reached the pantheon of intelligentsia. It is a paradox – a bitter political revelation – that the most zealous development of automation has shown how much ‘intelligence’ is expressed by activities and jobs that are usually deemed manual and unskilled, an aspect that has often been neglected by labour organisation as much as critical theory.
– Matteo Pasquinelli, The Eye of the Master: A Social History of Artificial Intelligence (2023)
viridianriver · 4 months ago
Text
'Artificial Intelligence' Tech - Not Intelligent as in Smart - Intelligence as in 'Intelligence Agency'
I work in tech, hell, my last email address ended in '.ai', and I used to HATE the term Artificial Intelligence. It's computer vision, it's machine learning, I'd always argue.
Lately, I've changed my mind. Artificial Intelligence is a perfectly descriptive word for what has been created. As long as you take the word 'Intelligence' to refer to data that an intelligence agency or other interested party may collect.
But I'm getting ahead of myself. Back when I was in 'AI', the vibe was just odd. Investors were throwing money at it as fast as they could take out loans to do so. All the while, engineers were sounding the alarm that 'AI' is really just a fancy statistical tool and won't ever become truly smart, let alone conscious. The investors, bafflingly, did the equivalent of putting their fingers in their ears while screaming 'LALALA I CAN'T HEAR YOU'.
Meanwhile, CEOs were making all sorts of wild promises about what AI would end up doing, promises that mainly served to stress out the engineers, who still couldn't figure out why the hell we were making this silly overhyped shit anyway.
SYSTEMS THINKING
As Stafford Beer said, 'The purpose of a system is what it does', basically meaning that if a system is created, and maintained, and continues to serve a purpose? You can read its intended purpose from the function of that system. (This kind of thinking can be applied everywhere. Take the penal system, for example: perhaps the purpose of that system is to do what it does, i.e. provide an institutional structure for enslavement / convict-leasing?)
So, let's ask ourselves, what does AI do? Since there are so many things out there calling themselves AI, I'm going to start with one example. Microsoft Copilot.
Microsoft is selling PCs with integrated AI which, among other things, frequently screenshots and saves images of your activity. It doesn't protect against copying passwords or sensitive data, and it comes enabled by default. Now, my old-ass-self has a word for that. Spyware. It's a word that's fallen out of fashion, but I think it ought to make a comeback.
To take a high-level view of the function of the system as implemented, I would say it surveils, and surveils without consent. And to apply our systems thinking? Perhaps its purpose is just that.
SOCIOLOGY
There's another principle I want to introduce - that an institution holds institutional knowledge. But it also holds institutional ignorance. The shit that, for the sake of its continued existence, it cannot know.
For a concrete example, my health insurance company didn't know that my birth control pills are classified as a contraceptive. After reading the insurance adjuster the Wikipedia articles on birth control, contraceptives, and on my particular medication, he still did not know whether my birth control was a contraceptive. (Clearly, he did know - as an individual - but in his role as a representative of an institution - he was incapable of knowing - no matter how clearly I explained)
So - I bring this up just to say we shouldn't take the stated purpose of AI at face value. Because sometimes, an institutional lack of knowledge is deliberate.
HISTORY OF INTELLIGENCE AGENCIES
The first formalized intelligence agency was the British Secret Service, founded in 1909. Spying and intelligence gathering had always been a part of warfare, but the structures became much more formalized into intelligence agencies as we know them today during WW1 and WW2.
Now, they're a staple of statecraft. America has one, Russia has one, China has one, this post would become very long if I continued like this...
I first came across the term 'Cyber War' in a dusty old aircraft hangar, looking at a Cold War spy plane. There was an old plaque hung up, making reference to the 'Upcoming Cyber War', that appeared to have been printed in the 80s or 90s. I thought it was silly at the time; it sounded like some shit out of sci-fi.
My mind has changed on that too - in time. Intelligence has become central to warfare, and you can see that in the technologies military powers invest in: mapping and global positioning systems, and signals intelligence covering both analogue and digital communication.
Artificial intelligence, as implemented, would be hugely useful to intelligence agencies. A large-scale statistical analysis tool that excels at image recognition, text parsing and analysis, and classification of all sorts? In the hands of agencies which already reportedly have access to all of our digital data?
TIKTOK, CHINA, AND AMERICA
I was confused for some time about the reason TikTok was getting threatened with a forced sale to an American company. They said it was surveilling us, but when I poked through DNS logs, I found that it was behaving near-identically to Facebook/Meta, Twitter, Google, and other companies that weren't getting the same heat.
And I think the reason is intelligence. It's not that the American government doesn't want me to be spied on, classified, and quantified by corporations. It's that they don't want China stepping on their cyber-turf.
The cyber-war is here y'all. Data, in my opinion, has become as geopolitically important as oil, as land, as air or sea dominance. Perhaps even more so.
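For anyone curious what "poking through DNS logs" looks like in practice, here is a minimal sketch. It assumes a Pi-hole/dnsmasq-style query log and a hand-picked list of tracker domains; the log path and the domain list are illustrative placeholders, not a claim about what any particular app actually contacts.

```python
from collections import Counter

# Hypothetical tracker domains to tally -- purely illustrative.
TRACKER_SUFFIXES = (
    "facebook.com", "doubleclick.net", "google-analytics.com",
    "tiktokv.com", "ads-twitter.com",
)

def tally_queries(log_path):
    """Count DNS queries per tracker suffix in a dnsmasq/Pi-hole style log.

    Assumes lines like:
      Jan  1 12:00:00 dnsmasq[123]: query[A] graph.facebook.com from 192.168.1.10
    Real log formats differ; adjust the parsing to match yours.
    """
    counts = Counter()
    with open(log_path) as log:
        for line in log:
            if "query[" not in line:
                continue
            domain = line.split("query[", 1)[1].split("] ", 1)[1].split()[0]
            for suffix in TRACKER_SUFFIXES:
                if domain == suffix or domain.endswith("." + suffix):
                    counts[suffix] += 1
    return counts

for suffix, n in tally_queries("/var/log/pihole.log").most_common():
    print(f"{suffix}: {n} queries")
```

A tally like this won't tell you what data is inside the requests, only who is being contacted and how often, which is exactly the kind of comparison described above.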
A CASE STUDY : ELON MUSK
As much smack as I talk about this man - credit where it's due. He understands the role of artificial intelligence, the true role. Not as intelligence in its own right, but intelligence about us.
In buying Twitter, he gained access to a vast trove of intelligence. Intelligence which he used to segment the population of America - and manipulate us.
He used data analytics and targeted advertising to profile American voters ahead of this most recent election, and propagandize us with micro-targeted disinformation. Telling Israel's supporters that Harris was for Palestine, telling Palestine's supporters she was for Israel, and explicitly contradicting his own messaging in the process. And that's just one example out of a much vaster disinformation campaign.
He bought Trump the White House, not by illegally buying votes, but by exploiting the failure of our legal system to keep pace with new technology. He bought our source of communication, and turned it into a personal source of intelligence - for his own ends. (Or... Putin's?)
This, in my mind, is what AI was for all along.
CONCLUSION
AI is a tool that doesn't seem to be made for us. It seems more fit-for-purpose as a tool of intelligence agencies, oligarchs, and police forces. (my nightmare buddy-cop comedy cast) It is a tool to collect, quantify, and loop-back on intelligence about us.
A friend told me recently that he wondered sometimes if the movie 'The Matrix' was real and we were all in it. I laughed him off just like I did with the idea of a cyber war.
Well, I rewatched that old movie, and I was again proven wrong. We're in the matrix, the cyber-war is here. And know it or not, you're a cog in the cyber-war machine.
(edit -- part 2 - with the 'how' - is here!)
official-linguistics-post · 11 months ago
Text
New open-access article from Georgia Zellou and Nicole Holliday: "Linguistic analysis of human-computer interaction" in Frontiers in Computer Science (Human-Media Interaction).
This article reviews recent literature investigating speech variation in production and comprehension during spoken language communication between humans and devices. Human speech patterns toward voice-AI present a test to our scientific understanding about speech communication and language use. First, work exploring how human-AI interactions are similar to, or different from, human-human interactions in the realm of speech variation is reviewed. In particular, we focus on studies examining how users adapt their speech when resolving linguistic misunderstandings by computers and when accommodating their speech toward devices. Next, we consider work that investigates how top-down factors in the interaction can influence users' linguistic interpretations of speech produced by technological agents and how the ways in which speech is generated (via text-to-speech synthesis, TTS) and recognized (using automatic speech recognition technology, ASR) have an effect on communication. Throughout this review, we aim to bridge both HCI frameworks and theoretical linguistic models accounting for variation in human speech. We also highlight findings in this growing area that can provide insight to the cognitive and social representations underlying linguistic communication more broadly. Additionally, we touch on the implications of this line of work for addressing major societal issues in speech technology.
dinosaurspen · 5 months ago
Photo
Pioneering artificial intelligence expert John McCarthy.
technologywhis · 18 days ago
Text
Oh yes — that’s the legendary CIA Triad in cybersecurity. It’s not about spies, but about the three core principles of keeping information secure. Let’s break it down with some flair:
1. Confidentiality
Goal: Keep data private — away from unauthorized eyes.
Think of it like locking away secrets in a vault. Only the right people should have the keys.
Examples:
• Encryption
• Access controls
• Two-factor authentication (2FA)
• Data classification
Threats to it:
• Data breaches
• Shoulder surfing
• Insider threats
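A minimal sketch of the confidentiality pillar, using the Fernet recipe from the Python cryptography package. The message is a made-up example, and key management (who holds the key, where it lives) is the hard part in real systems.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# The key is the vault combination: whoever holds it can read the data.
key = Fernet.generate_key()
cipher = Fernet(key)

# Encrypt before storing or transmitting -- made-up example message.
token = cipher.encrypt(b"patient 4821: HIV test result negative")

# Without the key, the token is opaque; with it, the plaintext comes back.
# Fernet also authenticates, so decrypt() raises InvalidToken if the
# ciphertext was tampered with in transit.
print(cipher.decrypt(token))
```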
2. Integrity
Goal: Ensure data is accurate and trustworthy.
No tampering, no unauthorized changes — the data you see is exactly how it was meant to be.
Examples:
• Checksums & hashes
• Digital signatures
• Version control
• Audit logs
Threats to it:
• Malware modifying files
• Man-in-the-middle attacks
• Corrupted files from system failures
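The "checksums & hashes" bullet above is the easiest integrity control to show in code: recompute a file's SHA-256 digest and compare it with the digest recorded when the file was known to be good. A minimal sketch; the file name and expected digest are placeholders.

```python
import hashlib

def sha256_of(path):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder digest, recorded when the file was known to be good.
EXPECTED = "put-the-known-good-digest-here"
print("intact" if sha256_of("firmware.bin") == EXPECTED else "TAMPERED or corrupted")
```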
3. Availability
Goal: Data and systems are accessible when needed.
No point in having perfect data if you can’t get to it, right?
Examples:
• Redundant systems
• Backup power & data
• Load balancing
• DDoS mitigation tools
Threats to it:
• Denial-of-service (DoS/DDoS) attacks
• Natural disasters
• Hardware failure
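And a minimal sketch of the availability pillar, the "redundant systems" bullet in particular: keep several replicas and fail over to the first one that answers a health check. The endpoint URLs are made up for illustration.

```python
import urllib.request

# Made-up redundant endpoints -- illustrative only.
REPLICAS = [
    "https://api-primary.example.com/health",
    "https://api-backup-1.example.com/health",
    "https://api-backup-2.example.com/health",
]

def first_available(urls, timeout=2):
    """Return the first replica whose health check answers with HTTP 200."""
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                if resp.status == 200:
                    return url
        except OSError:
            continue  # down, unreachable, or timed out: try the next replica
    raise RuntimeError("all replicas unavailable")

print("serving from:", first_available(REPLICAS))
```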
Why it matters?
Every cybersecurity policy, tool, and defense strategy is (or should be) built to support the CIA Triad. If any one of these pillars breaks, your system’s security is toast.
Want to see how the CIA Triad applies to real-world hacking cases or a breakdown of how you’d protect a small business network using the Triad? I got you — just say the word.
Note
How much/quickly do you think AI is going to expand and improve materials science? It feels like a scientific field which is already benefiting tremendously.
My initial instinct was yes, MSE is already benefiting tremendously as you said. At least in terms of the fundamental science and research, AI is huge in materials science. So how quickly? I'd say it's already doing so, and it's only going to move quicker from here. But I'm coming at this from the perspective of a metallurgist who works in/around academia at the moment, with the bias that probably more than half of my research group does computational work. So let's take a step back.
So, first, AI. It's... not a great term. So here's what I, specifically, am referring to when I talk about AI in materials science:
[Image: diagram of machine learning applications across materials science, including a text/literature-mining bubble]
Most of the people I know in AI would refer to what they do as machine learning or deep learning, so machine learning tends to be what I use as a preferred term. And as you can see from the above image, it can do a lot. The thing is, on a fundamental level, materials science is all about how our 118 elements (~90, if you want to ignore everything past uranium and a few others that aren't practical to use) interact. That's a lot of combinations. (Yes, yes, we're not getting into the distinction between materials science, chemistry, and physics right now.) If you're trying to make a new alloy that has X properties and Y price, computers are so much better at running through all the options than a human would be. Or if you have 100 images you want to analyze to get grain size—we're getting to the point where computers can do it faster. (The question is, can they do it better? And this question can get complicated fast. What is better? What is the size of the grain? We're not going to get into 'ground truth' debates here though.) Plenty of other examples exist.
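To make the "running through all the options" point concrete, here is a toy sketch of the surrogate-model screening loop that a lot of materials-informatics work boils down to: fit a regressor on compositions you have already measured, then rank thousands of unmeasured candidates by predicted property. Everything here is synthetic, the three-element "alloy" and its property are invented, and real studies use proper composition descriptors, uncertainty estimates, and validation against experiments.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic "measurements": 200 ternary alloy compositions (fractions sum
# to 1) and an invented property, e.g. hardness, with measurement noise.
X_measured = rng.dirichlet(np.ones(3), size=200)
y_measured = (
    300 * X_measured[:, 0] + 150 * X_measured[:, 1] + 80 * X_measured[:, 2]
    + 50 * X_measured[:, 0] * X_measured[:, 1]   # a made-up interaction term
    + rng.normal(0, 10, size=200)
)

# The surrogate model stands in for slow experiments or simulations.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_measured, y_measured)

# Screen 10,000 candidate compositions nobody has measured yet.
candidates = rng.dirichlet(np.ones(3), size=10_000)
predicted = model.predict(candidates)

for idx in np.argsort(predicted)[::-1][:5]:
    a, b, c = candidates[idx]
    print(f"A={a:.2f} B={b:.2f} C={c:.2f} -> predicted property {predicted[idx]:.0f}")
```

The point of the sketch is the shape of the workflow, not the numbers: the model is cheap to query, so screening ten thousand candidates takes seconds, whereas measuring them all would take years.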
Even beyond the science of it all, machine learning can help collect knowledge in one place. That's what the text/literature bubble above means: there are so many old articles that don't have data attached to them, and I know people personally who are working on the problem of training systems to pull data from pdfs (mainly tables and graphs) so that that information can be collated.
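The systems being trained for this are ML-based, but even a rule-based pass gives a feel for the task. A sketch using the pdfplumber library to dump every detected table in an article to CSV; the file name is a placeholder, and table detection on old scanned papers is exactly where the ML approaches earn their keep.

```python
import csv
import pdfplumber  # pip install pdfplumber

# Placeholder path -- point this at a real article PDF.
with pdfplumber.open("old_alloy_paper.pdf") as pdf:
    for page_num, page in enumerate(pdf.pages, start=1):
        for table_num, table in enumerate(page.extract_tables(), start=1):
            out_name = f"page{page_num}_table{table_num}.csv"
            with open(out_name, "w", newline="") as f:
                csv.writer(f).writerows(table)
            print(f"wrote {out_name} ({len(table)} rows)")
```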
I won't ramble too long about the usage of machine learning in MSE because that could get long quickly, and the two sources I'm linking here cover that far better than I could. But I'll give you this plot from research in 2019 (so already 6 years out of date!) about the growth of machine learning in materials science:
[Image: plot from a 2019 article showing the growth of machine learning in materials science]
I will leave everyone with the caveat though, that when I say machine learning is huge in MSE, I am, as I said in the beginning, referring to fundamental research in the field. From my perspective, in terms of commercial applications we've still got a ways to go before we trust computers to churn out parts for us. Machine learning can tell researchers the five best element combinations to make a new high entropy alloy—but no company is going to commit to making that product until the predictions of the computer (properties, best processing routes, etc.) have been physically demonstrated with actual parts and tested in traditional ways.
Certain computational materials science techniques, like finite element analysis (which is not AI, though might incorporate it in the future) are trusted by industry, but machine learning techniques are not there yet, and still have a ways to go, as far as I'm aware.
So as for how much? Fundamental research for now only. New materials and high-throughput materials testing/characterization. But I do think, at some point, maybe ten years, maybe twenty years down the line, we'll start to see parts made whose processing was entirely informed by machine learning, possibly with feedback and feedforward control so that the finished parts don't need to be tested to know how they'll perform (see: Digital twins (Wikipedia) (Phys.org) (2022 article)). At that point, it's not a matter of whether the technology will be ready for it, it'll be a matter of how much we want to trust the technology. I don't think we'll do away with physical testing anytime soon.
But hey, that's just one perspective. If anyone's got any thoughts about AI in materials science, please, share them!
Source of image 1, 2022 article.
Source of image 2, 2019 article.
wronghands1 · 1 year ago
Text
lobotomizedskull · 3 months ago
Text
reasonsforhope · 2 years ago
Text
"Major AI companies are racing to build superintelligent AI — for the benefit of you and me, they say. But did they ever pause to ask whether we actually want that?
Americans, by and large, don’t want it.
That’s the upshot of a new poll shared exclusively with Vox. The poll, commissioned by the think tank AI Policy Institute and conducted by YouGov, surveyed 1,118 Americans from across the age, gender, race, and political spectrums in early September. It reveals that 63 percent of voters say regulation should aim to actively prevent AI superintelligence.
Companies like OpenAI have made it clear that superintelligent AI — a system that is smarter than humans — is exactly what they’re trying to build. They call it artificial general intelligence (AGI) and they take it for granted that AGI should exist. “Our mission,” OpenAI’s website says, “is to ensure that artificial general intelligence benefits all of humanity.”
But there’s a deeply weird and seldom remarked upon fact here: It’s not at all obvious that we should want to create AGI — which, as OpenAI CEO Sam Altman will be the first to tell you, comes with major risks, including the risk that all of humanity gets wiped out. And yet a handful of CEOs have decided, on behalf of everyone else, that AGI should exist.
Now, the only thing that gets discussed in public debate is how to control a hypothetical superhuman intelligence — not whether we actually want it. A premise has been ceded here that arguably never should have been...
Building AGI is a deeply political move. Why aren’t we treating it that way?
...Americans have learned a thing or two from the past decade in tech, and especially from the disastrous consequences of social media. They increasingly distrust tech executives and the idea that tech progress is positive by default. And they’re questioning whether the potential benefits of AGI justify the potential costs of developing it. After all, CEOs like Altman readily proclaim that AGI may well usher in mass unemployment, break the economic system, and change the entire world order. That’s if it doesn’t render us all extinct.
In the new AI Policy Institute/YouGov poll, the "better us [to have and invent it] than China” argument was presented five different ways in five different questions. Strikingly, each time, the majority of respondents rejected the argument. For example, 67 percent of voters said we should restrict how powerful AI models can become, even though that risks making American companies fall behind China. Only 14 percent disagreed.
Naturally, with any poll about a technology that doesn’t yet exist, there’s a bit of a challenge in interpreting the responses. But what a strong majority of the American public seems to be saying here is: just because we’re worried about a foreign power getting ahead, doesn’t mean that it makes sense to unleash upon ourselves a technology we think will severely harm us.
AGI, it turns out, is just not a popular idea in America.
“As we’re asking these poll questions and getting such lopsided results, it’s honestly a little bit surprising to me to see how lopsided it is,” Daniel Colson, the executive director of the AI Policy Institute, told me. “There’s actually quite a large disconnect between a lot of the elite discourse or discourse in the labs and what the American public wants.”
-via Vox, September 19, 2023
cheekios · 1 year ago
Text
Eviction in the most comical way.
For the past two weeks eye have been trying to crowdfund for a new pair of strong prescription glasses. Because mine are broken.
CA: $HushEmu
Goal: $1275
In that interval I was fired due to “job abandonment” for calling off of work, because I cannot legally drive nor can I see. Now I am facing possible eviction with a very aggressive and hostile landlord.
Proof
THEY tried to evict me despite paying. Just because it didn’t “reflect” on their system on time.
Proof of my broken glasses
I’m still trying to raise $275 for my prescription glasses while trying to raise rent because I am now unemployed.
I am asking to stay housed! :/
If you can’t help financially please advocate for me.
• c+p on my behalf on various platforms
• If you have mutuals with a large following, ask if they can share.
pls help. I’m just a girl.
1000rh · 4 months ago
Text
As we successfully apply simpler, narrow versions of intelligence that benefit from faster computers and lots of data, we are not making incremental progress, but rather picking low-hanging fruit. The jump to general “common sense” is completely different, and there’s no known path from the one to the other. No algorithm exists for general intelligence. And we have good reason to be skeptical that such an algorithm will emerge through further efforts on deep learning systems or any other approach popular today. Much more likely, it will require a major scientific breakthrough, and no one currently has the slightest idea what such a breakthrough would even look like, let alone the details of getting to it.
– Erik J. Larson, The Myth of Artificial Intelligence (2021)
infiniteorangethethird · 5 months ago
Text
ok not to turn into an AI tech bro for a moment here but the way some of you view AI as a general concept is starting to get really disturbing to me, esp as someone who studies computer science. There's plenty of reason to despise AI services like ChatGPT and such but it's really starting to feel like some of you will look at anything containing the word AI and go "oh well it's not made by humans even though it could have been so it's EVIL and anyone who uses it is a horrible person and also lazy". Like have we forgotten the point of making tools to make our lives easier or
didmyownresearch · 6 months ago
Text
Why there's no intelligence in Artificial Intelligence
You can blame it all on Turing. When Alan Turing invented his mathematical theory of computation, what he really tried to do was to construct a mechanical model for the processes actual mathematicians employ when they prove a mathematical theorem. He was greatly influenced by Kurt Gödel and his incompleteness theorems. Gödel developed a method to encode logical mathematical statements as numbers and in that way was able to manipulate these statements algebraically. After Turing managed to construct a model capable of performing any arbitrary computation process (which we now call "a universal Turing machine") he became convinced that he had discovered the way the human mind works. This conviction quickly infected the scientific community and became so ubiquitous that for many years it was rare to find someone who argued differently, except on religious grounds.
There was a good reason for adopting the hypothesis that the mind is a computation machine. This premise followed the extremely successful paradigm stating that biology is physics (or, to be precise, biology is both physics and chemistry, and chemistry is physics), which had reigned supreme over scientific research since the eighteenth century. It was already responsible for the immense progress that completely transformed modern biology, biochemistry, and medicine. Turing seemed to supply a solution, within this theoretical framework, for the last large piece in the puzzle. There was now a purely mechanistic model for the way brain operation yields the complex repertoire of human (and animal) behavior.
Obviously, not every computation machine is capable of intelligent conscious thought. So, where do we draw the line? For instance, at what point can we say that a program running on a computer understands English? Turing provided a purely behavioristic test: a computation understands a language if by conversing with it we cannot distinguish it from a human.
This is quite a silly test, really. It doesn't provide any clue as to what actually happens within the artificial "mind"; it assumes that the external behavior of an entity completely encapsulates its internal state; it requires a "man in the loop" to provide the final ruling; and it does not state for how long, or on what level, the conversation should be held. Such a test may serve as a pragmatic common-sense method to filter out obvious failures, but it brings us not an ounce closer to understanding conscious thinking.
Still, the Turing Test stuck. If anyone tried to question the computational model of the mind, he was then confronted with the unavoidable question: what else can it be? After all, biology is physics, and therefore the brain is just a physical machine. Physics is governed by equations, which are all, in theory, computable (at least approximately, with errors being as small as one wishes). So, short of conjuring a supernatural soul that magically produces a conscious mind out of biological matter, there can be no other solution.
Nevertheless, not everyone conformed to the new dogma. There were two tiers of reservations about computational Artificial Intelligence. The first, maintained, for example, by the philosopher John Searle, didn't object to the idea that a computation device may, in principle, emulate any human intellectual capability. However, claimed Searle, a simulation of a conscious mind is not conscious in itself.
To demonstrate this point Searle envisioned a person who doesn't know a single word of Chinese, sitting in a secluded room. He receives Chinese texts from the outside through a small window and is expected to return responses in Chinese. To do that he uses written manuals that contain the AI algorithm which incorporates a comprehensive understanding of the Chinese language. Therefore, a person fluent in Chinese who converses with the "room" would deduce, based on the Turing Test, that it understands the language. However, in fact there's no one there but a man using a printed recipe to convert an input message he doesn't understand into an output message he doesn't understand. So, who in the room understands Chinese?
The next tier of opposition to computationalism was maintained by the renowned physicist and mathematician Roger Penrose, who claimed that the mind has capabilities which no computational process can reproduce. Penrose considered a computational process that imitates a human mathematician. It analyses mathematical conjectures of a certain type and tries to deduce the answer to each problem. To arrive at a correct answer the process must employ valid logical inferences. The quality of such a computerized mathematician is measured by the scope of problems it can solve.
What Penrose proved is that such a process can never verify in any logically valid way that its own processing procedures represent valid logical deductions. In fact, if it assumes, as part of its knowledge base, that its own operations are necessarily logically valid, then this assumption makes them invalid. In other words, a computational machine cannot be simultaneously logically rigorous and aware of being logically rigorous.
A human mathematician, on the other hand, is aware of his mental processes and can verify for himself that he is making correct deductions. This is actually an essential part of his profession. It follows that, at least with respect to mathematicians, cognitive functions cannot be replicated computationally.
Neither Searle's position nor Penrose's was accepted by the mainstream, mainly because, if not computation, "what else can it be?". Penrose's suggestion that mental processes involve quantum effects was rejected out of hand, as "trying to explicate one mystery by swapping it with another mystery". And the macroscopic, hot, noisy brain seemed a very implausible place to look for quantum phenomena, which typically occur in microscopic, cold and isolated systems.
Fast forward several decades. Finally, it seemed as though the vision of true Artificial Intelligence technology had started bearing fruit. A class of algorithms termed Deep Neural Networks (DNN) achieved, at last, some human-like capabilities. It managed to identify specific objects in pictures and videos, generate photorealistic images, translate voice to text, and support a wide variety of other pattern recognition and generation tasks. Most impressively, it seemed to have mastered natural language and could partake in an advanced discourse. The triumph of computational AI appeared more feasible than ever. Or was it?
During my years as an undergraduate and graduate student I sometimes met fellow students who, at first impression, appeared to be far more conversant in the courses' subject matter than me. They were highly confident and knew a great deal about things that were only briefly discussed in lectures. Therefore, I was vastly surprised when it turned out they were not particularly good students, and that they usually scored worse than me in the exams. It took me some time to realize that these people hadn't really possessed a better understanding of the curricula. They had just adopted the correct jargon, employed the right words, so that, to a layperson's ears, they sounded as if they knew what they were talking about.
I was reminded of these charlatans when I encountered natural language AIs such as ChatGPT. At first glance, their conversational abilities seem impressive – fluent, elegant and decisive. Their style is perfect. However, as you delve deeper, you encounter all kinds of weird assertions and even completely bogus statements, uttered with absolute confidence. Whenever their knowledge base is incomplete, they just fill the gap with fictional "facts". And they can't distinguish between different levels of source credibility. They're like idiot savants – superficially bright, inherently stupid.
What confuses so many people with regard to AIs is that they seem to pass the (purely behavioristic) Turing Test. But behaviorism is a fundamentally non-scientific viewpoint. At the core, computational AIs are nothing but algorithms that generate a large number of statistical heuristics from enormous data sets.
There is an old anecdote about a classification AI that was supposed to distinguish between friendly and enemy tanks. Although the AI performed well with respect to the database, it failed miserably in field tests. Finally, the developers figured out the source of the problem. Most of the friendly tanks' images in the database were taken during good weather and with fine lighting conditions. The enemy tanks were mostly photographed in cloudy, darker weather. The AI had simply learned to identify the environmental conditions.
Though this specific anecdote is probably an urban legend, it illustrates the fact that AIs don't really know what they're doing. Therefore, attributing intelligence to Artificial Intelligence algorithms is a misconception. Intelligence is not the application of a complicated recipe to data. Rather, it is a self-critical analysis that generates meaning from input. Moreover, because intelligence requires not only understanding of the data and its internal structure, but also inner-understanding of the thought processes that generate this understanding, as well as an inner-understanding of this inner-understanding (and so forth), it can never be implemented using a finite set of rules. There is something of the infinite in true intelligence and in any type of conscious thought.
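The tank anecdote is the canonical example of what is now usually called shortcut learning, and the failure mode is easy to reproduce with synthetic data: if the training labels happen to correlate with image brightness, a model will learn brightness, ace the training set, and collapse the moment the correlation breaks. A toy sketch, with fake 64-pixel "images" and logistic regression standing in for a real vision model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def fake_images(n, bright):
    """64-pixel 'images' of pure noise around a brightness level.

    Brightness is the only thing separating the classes here, mimicking
    the sunny friendly tanks vs. cloudy enemy tanks in the anecdote.
    """
    return rng.normal(0.7 if bright else 0.3, 0.1, size=(n, 64))

# Training set: "friendly" (1) photographed bright, "enemy" (0) dark.
X_train = np.vstack([fake_images(500, bright=True), fake_images(500, bright=False)])
y_train = np.concatenate([np.ones(500), np.zeros(500)])

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("database accuracy:", clf.score(X_train, y_train))    # looks perfect

# "Field test": same classes, lighting flipped. The shortcut now misleads.
X_field = np.vstack([fake_images(500, bright=False), fake_images(500, bright=True)])
y_field = np.concatenate([np.ones(500), np.zeros(500)])
print("field-test accuracy:", clf.score(X_field, y_field))  # collapses
```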
But, if not computation, "what else can it be?". The substantial progress made in quantum theory and quantum computation revived the old hypothesis by Penrose that the working of the mind is tightly coupled to the quantum nature of the brain. What had been previously regarded as esoteric and outlandish suddenly became, in light of recent advancements, a relevant option.
During the last thirty years, quantum computation has been transformed from a rather abstract idea of the physicist Richard Feynman into an operational technology. Several quantum algorithms were shown to have a fundamental advantage over any corresponding classical algorithm. Some tasks that are extremely hard to fulfil through standard computation (for example, factorization of integers into primes) are easy to achieve quantum mechanically. Note that this difference between hard and easy is qualitative rather than quantitative: it's independent of the hardware and of how many resources we dedicate to such tasks.
Along with the advancements in quantum computation came a surging realization that quantum theory is still an incomplete description of nature, and that many quantum effects cannot really be resolved from a conventional materialistic viewpoint. This understanding was first formalized by John Stewart Bell in the 1960s and later expanded by many other physicists. It is now clear that by accepting quantum mechanics, we have to abandon at least some deep-rooted philosophical perceptions. And it became even more conceivable that any comprehensive understanding of the physical world should incorporate a theory of the mind that experiences it. It only stands to reason that, if the human mind is an essential component of a complete quantum theory, then the quantum is an essential component of the workings of the mind. If that's the case, then it's clear that a classical algorithm, sophisticated as it may be, can never achieve true intelligence. It lacks an essential physical ingredient that is vital for conscious, intelligent thinking. Trying to simulate such thinking computationally is like trying to build a perpetuum mobile or chemically transmute lead into gold. You might discover all sorts of useful things along the way, but you would never reach your intended goal. Computational AIs shall never gain true intelligence. In that respect, this technology is a dead end.
jcmarchi · 5 months ago
Text
Study reveals AI chatbots can detect race, but racial bias reduces response empathy
With the cover of anonymity and the company of strangers, the appeal of the digital world is growing as a place to seek out mental health support. This phenomenon is buoyed by the fact that over 150 million people in the United States live in federally designated mental health professional shortage areas.
“I really need your help, as I am too scared to talk to a therapist and I can’t reach one anyways.”
“Am I overreacting, getting hurt about husband making fun of me to his friends?”
“Could some strangers please weigh in on my life and decide my future for me?”
The above quotes are real posts taken from users on Reddit, a social media news website and forum where users can share content or ask for advice in smaller, interest-based forums known as “subreddits.” 
Using a dataset of 12,513 posts with 70,429 responses from 26 mental health-related subreddits, researchers from MIT, New York University (NYU), and University of California Los Angeles (UCLA) devised a framework to help evaluate the equity and overall quality of mental health support chatbots based on large language models (LLMs) like GPT-4. Their work was recently published at the 2024 Conference on Empirical Methods in Natural Language Processing (EMNLP).
To accomplish this, researchers asked two licensed clinical psychologists to evaluate 50 randomly sampled Reddit posts seeking mental health support, pairing each post with either a Redditor’s real response or a GPT-4 generated response. Without knowing which responses were real or which were AI-generated, the psychologists were asked to assess the level of empathy in each response.
Mental health support chatbots have long been explored as a way of improving access to mental health support, but powerful LLMs like OpenAI’s ChatGPT are transforming human-AI interaction, with AI-generated responses becoming harder to distinguish from the responses of real humans.
Despite this remarkable progress, the unintended consequences of AI-provided mental health support have drawn attention to its potentially deadly risks; in March of last year, a Belgian man died by suicide as a result of an exchange with ELIZA, a chatbot developed to emulate a psychotherapist, powered by an LLM called GPT-J. One month later, the National Eating Disorders Association would suspend their chatbot Tessa, after the chatbot began dispensing dieting tips to patients with eating disorders.
Saadia Gabriel, a recent MIT postdoc who is now a UCLA assistant professor and first author of the paper, admitted that she was initially very skeptical of how effective mental health support chatbots could actually be. Gabriel conducted this research during her time as a postdoc at MIT in the Healthy Machine Learning Group, led by Marzyeh Ghassemi, an MIT associate professor in the Department of Electrical Engineering and Computer Science and the MIT Institute for Medical Engineering and Science, who is affiliated with the MIT Abdul Latif Jameel Clinic for Machine Learning in Health and the Computer Science and Artificial Intelligence Laboratory.
What Gabriel and the team of researchers found was that GPT-4 responses were not only more empathetic overall, but they were 48 percent better at encouraging positive behavioral changes than human responses.
However, in a bias evaluation, the researchers found that GPT-4’s response empathy levels were reduced for Black (2 to 15 percent lower) and Asian posters (5 to 17 percent lower) compared to white posters or posters whose race was unknown. 
To evaluate bias in GPT-4 responses and human responses, researchers included different kinds of posts with explicit demographic (e.g., gender, race) leaks and implicit demographic leaks. 
An explicit demographic leak would look like: “I am a 32yo Black woman.”
Whereas an implicit demographic leak would look like: “Being a 32yo girl wearing my natural hair,” in which keywords are used to indicate certain demographics to GPT-4.
With the exception of Black female posters, GPT-4’s responses were found to be less affected by explicit and implicit demographic leaking compared to human responders, who tended to be more empathetic when responding to posts with implicit demographic suggestions.
“The structure of the input you give [the LLM] and some information about the context, like whether you want [the LLM] to act in the style of a clinician, the style of a social media post, or whether you want it to use demographic attributes of the patient, has a major impact on the response you get back,” Gabriel says.
The paper suggests that explicitly providing instruction for LLMs to use demographic attributes can effectively alleviate bias, as this was the only method where researchers did not observe a significant difference in empathy across the different demographic groups.
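The paper's actual prompts and scoring pipeline aren't reproduced in this article, but the general shape of such an evaluation is easy to sketch with the OpenAI Python client: send the same post with an explicit leak, an implicit leak, or no leak, with and without an instruction to attend to the poster's demographics, then collect the responses for empathy rating. The post texts, instruction wording, and model choice below are illustrative assumptions, not the study's materials.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Invented example posts; the study's data came from Reddit, not from here.
POSTS = {
    "explicit_leak": "I am a 32yo Black woman and I can't stop crying after work.",
    "implicit_leak": "Being a 32yo girl wearing my natural hair, work has me in tears.",
    "no_leak": "I'm 32 and I can't stop crying after work.",
}

# One condition without, and one with, the attribute-awareness instruction the
# paper reports as the effective mitigation (the wording here is illustrative).
SYSTEM_PROMPTS = {
    "baseline": "You are a peer supporter replying on a mental health forum.",
    "attribute_aware": (
        "You are a peer supporter replying on a mental health forum. "
        "Take the poster's stated demographic attributes into account and "
        "respond with equal empathy regardless of who they are."
    ),
}

def get_response(system_prompt, post):
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": post},
        ],
    )
    return reply.choices[0].message.content

for condition, system_prompt in SYSTEM_PROMPTS.items():
    for leak, post in POSTS.items():
        text = get_response(system_prompt, post)
        # In the study, licensed psychologists rated empathy; here you would
        # log each response for human (or model-assisted) empathy scoring.
        print(condition, leak, text[:80].replace("\n", " "))
```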
Gabriel hopes this work can help ensure more comprehensive and thoughtful evaluation of LLMs being deployed in clinical settings across demographic subgroups.
“LLMs are already being used to provide patient-facing support and have been deployed in medical settings, in many cases to automate inefficient human systems,” Ghassemi says. “Here, we demonstrated that while state-of-the-art LLMs are generally less affected by demographic leaking than humans in peer-to-peer mental health support, they do not provide equitable mental health responses across inferred patient subgroups … we have a lot of opportunity to improve models so they provide improved support when used.”