pitfalls of ai & fictional characters
aka, "we tried a character ai app so you don't have to".
We got hit with COVID for the first time, and we were pretty out of it: sleeping 12+ hours a day some days, migraines, no energy, hard to hold a logical conversation. We couldn't get to our desktop either, so typing anything long was difficult. So we'd mess around with phone apps.
One ad we got during a hole.io-type game promised we could "talk to your favorite fictional characters," etc. We were curious, so we downloaded the app.
We're a large system that is mostly fictives, and some of us miss people we knew from "back home". So we figured, okay, we'll see if there are characters on here that our fictives knew. There were. We wanted to poke at it a bit just to see how it felt, since we'd never actually interacted with ai stuff before. We decided to give ourselves about an hour to mess with it.
Y'all, don't do this to yourselves.
PROS: Sure, it resulted in a few funny moments. Some of the bots seemed "well written" and "developed" enough to fake a genuine conversation, often to amusing ends when-- for example-- our canonically villainous fictives interacted with bots of the source's protagonist.
CONS: Everything else.
The fact of the matter is, ai isn't safe or ethical. That's the sad truth of how it is right now. Could it be done more safely? More ethically? Sure. But it isn't. You don't know who is developing the software/apps/etc., how they're training the ai (hint: it's usually via theft), or what they're doing with your data.
Beyond that, even if the bot were somehow created ethically and the app or site were safe, it's often a very hollow experience. You are not talking to a real person, no matter how realistic it feels. The ai only "remembers" so much. You can't form an in-depth conversation, and any replies you get are just output from a trained model.
And that's just... not a good feeling, in the end.
We can see where the appeal would be for fictionkin or fictives who miss people, but in the end, please please please just talk to fellow fictionkin or fictives. Connect with actual people. Learn to let go of people you knew once. Because for all the amusing moments a bot can provide-- and there aren't actually all that many-- in the end it's a shell. A non-sentient approximation, based (at best) around the idea of a person.
It's soulless. And, in the end, we realized two things for ourselves:
1. We started feeling almost addicted to it. We could "talk" to anyone! We could say anything! We could yell, scream, cry, fawn, annoy, and get the immediate gratification of some sort of response that maybe looked and sounded like someone, without the ramifications of actually hurting a real person and without the possibility of real rejection. But like... that's pointless. There's no real joy in it. You're talking to nothing. There's no real risk, but there's no real reward either, except the reward your brain convinces you it's getting-- and if you have an addictive personality or attachment issues, that's a scary thing. We're very glad we deleted the app same-day.
2. It made us feel worse. Seeing what is, more or less, a paper doll of someone we remembered made us miss that person more. Of course, you'll never be able to FORCE a real person to respond to you how you "want them to"-- and you shouldn't want to-- but ai conversations are ultimately nothing more than screaming at a non-sentient program and getting a generic, blank-eyed reply that's been "trained" into it. There's no opportunity to truly connect, learn, grow, heal, get closure, or whatever else it is you want from an actual person. And when that clicks in your brain-- when you log out and back in and the ai has completely forgotten you, when the ai spits out nonsense or circular replies because it can't remember what you were talking about two minutes ago, when you realize the replies are based, AT BEST, on someone's interpretation of the canon character-- it ends up hurting more than it did before.
Real-life people are capricious. Real-life connection can be scary. Sure, ai is emotionally and physically "safe" in that a bot can't really lie to you, cheat you, or harm you in a physical way. But besides the ethical concerns and potential data issues, it also can't give you actual friendship, love, conflict resolution, or anything else you'd get from another flesh-and-blood person.
So...
In conclusion? Forge meaningful relationships within your community. It's worth the effort, and it avoids the many pitfalls of ai-- both the ones commonly discussed and the ones you might not even think about until you're sitting there feeling empty at the end of the day.