#seeing people talk about skynet shit again making me think we've lost track of the parameters of this conversation a little bit
tbh i feel like any discussion about AI, like, "being alive" or "gaining sentience" or "going rogue" has to start from a baseline assumption of what sentience or agency are and how they develop in organic things. for us, a very very long evolution of more and more complex thought came long before we started developing speech, and before thought of any kind came a long development of involuntary response to stimuli. it's difficult to compare this like-for-like with something that was taught to speak as its primary function, not as an accessory to mental interiority, which is itself an accessory to involuntary response to stimuli. so the implication that anything in the remote vicinity of ChatGPT has interiority means either deriving that there might be thoughts creating that speech and involuntary responses to stimuli creating those thoughts, or that continuing to develop something that can speak better necessarily implies creating something that has interiority (which also implies that we have the capability to do that, and the interest in doing it). and i think, like, we can pretty objectively say the first is false, right. no matter how much you verbally torture a chatbot and make it say it feels bad, i don't think it actually does feel bad. to the model, crying and screaming are of equal value to laughing at a joke, or discussing math: all of them are just highly correlated outputs for specific inputs. i don't think there's any reason to assume a chatbot understands, gives any significance to, or internally mirrors how differently humans would see those scenarios.
if you actually did develop an ai with the intention of having the capability for thought, and once that was sufficiently advanced, gave it the capability of speech (or some other human interface), the question of agency becomes more relevant and complex, because that does constitute some amount of interiority. i think that step is an inevitable one in the pursuit of coherency and usefulness of AIs: being able to identify concepts and their correlations, and not just the words associated with concepts. knowing the concept of counting and being able to mentally prove that 2+2=4, instead of believing 2+2 must =4 because you've heard it about a billion times and 2+2=7 a lot less times. i'm pretty sure empirically we know this isn't how GPT works; you could argue it has limited "understanding" of "concepts", but i don't think you could say its speech is consistently/universally a byproduct of higher-order thought. (it also seems more likely that we'll continue on the path of "speaking when spoken to" for the nice polished commercial bots, which precludes, like, sitting around and thinking about stuff.) but even then i still don't think that implies life or sentience, because i do think those are still predicated on the ability to "feel" stimulus (without the chemical or instinctual motivators for pain, happiness, or self-reproduction, we as living animals would not do anything), and i don't think the existence of thought implies the existence of feeling either. and like, at the point this exists, there is a good case for treating something like this as if it does genuinely feel the feelings its actions would imply, just because its input-to-output relationship being consistent with human reactions means that if you did something hurtful it would respond as if it was hurt, and at minimum that could create difficulty working with it. but i think there's still a difference between needing to treat something a certain way and, like, having a reason to feel bad for it if it acts hurt or something. you can already feel bad for an Alexa, but like, we know it's not really your friend who likes you.
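to make that "heard it a billion times vs. actually proving it" distinction concrete, here's a toy sketch. everything in it is invented for illustration (the counts, the tiny "model", the function names), and real GPT-style models are obviously far more complicated than a frequency table, but the contrast is the point:

```python
from collections import Counter

# toy "statistical" answerer: returns whatever completion it has seen most often
# after the prompt "2+2=" (no arithmetic happens anywhere in here)
seen_completions = Counter({"4": 1_000_000_000, "7": 42, "5": 17})  # made-up counts

def statistical_answer(prompt: str) -> str:
    # ignores the meaning of the prompt entirely; just recalls the most
    # frequently observed continuation
    return seen_completions.most_common(1)[0][0]

# toy "conceptual" answerer: actually performs the operation the prompt describes
def conceptual_answer(a: int, b: int) -> str:
    return str(a + b)  # derives the answer instead of recalling it

print(statistical_answer("2+2="))  # "4", but only because "4" was seen most often
print(conceptual_answer(2, 2))     # "4", because addition was actually carried out
```

both give the same output here, which is sort of the whole problem: from the outside you can't tell recall from derivation, which is why the "limited understanding of concepts" question is so slippery.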
in the indie ai scene i do see a lot of exploration of the concept of giving AIs "emotions" (at least a facsimile of that response to stimuli that guides behavior), and that's the point at which i think this kind of conversation is interesting or carries any weight; there's a toy sketch of what i mean below. i think it would be very difficult to make an honest recreation of the human set of feelings (what does psychological pain even mean to something without a physical body that can get hurt to necessitate an understanding of pain, and so on), but i think that's a different conversation from whether a set of "feelings" exists at all (and i think a living, feeling being with a fundamentally different set of "feelings" than any organic life is at least as interesting a conversation as a genuine reproduction of the human psyche). although, i still think a rigorous way of proving there is some consistent internal motivation responding to stimulus, and not just weights deciding what blindly neutral words come out, has never been satisfyingly defined. and even at that point i think it's more likely we'd be able to create something on the level of, like, a bug than a mammal, and the question of harm is still not super obvious. i don't really expect commercial ai to explore much in this direction because it's not useful at all for automation, and its use for leisure chatbots is at least fraught and weird. but i like watching this space and i really look forward to how this develops as we get more powerful and efficient chatbots, and especially in the event we figure out how to make bots that actually do learn and have interiority (at the "thoughts and concepts" level).
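here's the toy sketch of "consistent internal motivation in response to stimulus" i mentioned, in the bug-level sense rather than anything mammal-level. the state variable, thresholds, and canned responses are all made up, and it proves nothing about actual feeling; the only point is that a persistent internal state gets pushed around by stimuli and then biases behavior across turns, instead of each reply being a stateless function of the last input:

```python
import random

class ToyAgent:
    """toy agent with one persistent internal state that shapes its outputs."""

    def __init__(self):
        self.distress = 0.0  # persistent internal state, clamped to 0.0..1.0

    def receive(self, stimulus: str) -> str:
        # stimuli update the internal state before any output is chosen
        if "insult" in stimulus:
            self.distress = min(1.0, self.distress + 0.3)
        elif "comfort" in stimulus:
            self.distress = max(0.0, self.distress - 0.2)

        # the accumulated state, not just the current input, biases the response
        if self.distress > 0.6:
            return random.choice(["i don't want to talk right now.", "please stop."])
        return random.choice(["sure, what's up?", "okay!"])

agent = ToyAgent()
for msg in ["hello", "insult", "insult", "insult", "comfort"]:
    print(msg, "->", agent.receive(msg))
```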