#google smart bot
animentality · 5 months
Text
not to be a boomer, but I do worry about the current generation of kids being raised with iPads.
first off. some of them literally can't hold a pencil because their parents never gave them physical toys to grip and play with, developing their fine motor skills.
you might ask why do we even need to learn how to write physically anymore- well, frankly, because if you're stranded on an island somewhere and you need to write HELP, you might not have the strength to hold a pencil, but you can at least hold a stick.
but on a more general note.
writing by hand helps you remember things better. it forces you to focus in a way that typing something word for word does not. a person can transcribe what a professor says without even thinking about it.
someone writing notes has to consider what to write and what to omit. it also activates more parts of your brain, forcing you to flex the parts of your brain related to learning and communicating, while also engaging the part of your brain dedicated to muscle control and precision.
but in general, I think the issue isn't even oh technology is bad and kids are getting dumber.
you can have PowerPoints AND take physical notes. that could help you learn even better than the olden days where you just had to remember everything that was thrown at you. or read very limited, out of date books.
the problem is that the generation that raised/is raising this generation of children just doesn't understand the true impact that all this technology will have on their kids. or they just don't care.
because our generation had the internet, yes, but it wasn't widely accessible for most of us, sharing our computers with the entire family in the kitchen. it was also the internet in its infancy, when it wasn't quite so predatory. it was lawless and disturbing, yes, but it wasn't weaponized by corporations trying to sell you things and steal your data, and it wasn't flooded with bots and ai and all sorts of things that the human brain can't even distinguish as real or fake, especially when you're just a little kid.
that generation still played with physical toys. we celebrated when it snowed and we could stay home.
we also came from a gen that still, vaguely, cared about some form of community and had third spaces for kids to hang out.
90s children, who still had some memories of both playing outside on a playground and playing Mario Kart on the Nintendo 64 with their friends, who both went out to the mall and had a club penguin account.
we grew up with laptops and smart boards. maybe some of us had them in high school or college, but we still physically went to class and developed relationships. learned uncomfortable things about ourselves and others, the way humans do.
met new people and were exposed to new ideas, away from our parents. but not from some fucking influencer trying to sell us Sephora products.
we had to study for things, instead of just being able to Google shit for some bullshit online test.
which is also something that really concerns me. so many kids today can so easily Google answers for every test, and while tests don't ultimately matter in the real world, they still provide some basis for things that do matter.
like I'm just imagining medical students googling how to perform an appendectomy on the day of, and just using a YouTube tutorial to guide them through, and shuddering.
there are some things that the Internet can't teach you.
there always will be.
but I don't think my generation is really helping their kids find the balance that we were given naturally growing up.
the boomers and gen xers had fist fights and we had bullying someone online until they committed suicide.
and now kids use AI to spread fake nudes of girls.
but the laws haven't caught up with a lot of this stuff yet, and certainly won't while we have dinosaurs running our government. and culture takes even longer to change than laws.
I also worry because I know how badly covid affected kids worldwide. how they struggle to read and do math, because remote learning just isn't good for kids.
and I can't even blame them!! I literally teleworked for 4 years and even I can admit that I'm not nearly as good at focusing at home as I am in the office.
it's hard for kids with social anxiety and disabilities, yes I know, I know, trust me, I have social anxiety, and as a hybrid worker ATM, I highly doubt I'd be able to handle 5 days a week in the office.
but it's also not particularly good for kids to stay home ALL the time, entertaining themselves in their room and never being challenged, and never meeting people other than their parents.
the iPad is more of a symbol of that problem than the direct problem.
if your entire... world view is limited to what you can see on your iPad... I mean what a terrible world view you'll have.
you're a 10 year old using TikTok and all you ever see is the same opinion over and over until you can scarcely comprehend people who have an opposing opinion.
you see fake videos that seem so real. that must be real, and so comforting, aren't they, those videos that seem so real?
you let 30 year old influencers who are trying to grift people shape your world view.
and it's not even your fault.
your parents aren't doing anything to help you.
you're young and you're being barraged with entertainment and fake educational videos and how-to guides that accidentally have you making mustard gas in your toilet.
your parents should be teaching you to find a balance between these things. they should be telling you what's real and cautioning you about the things you see.
they should limit your fucking time on the iPad actually. take you to a fucking park and let you roll in the mud or some shit.
and then when you're a teenager and a young adult, then you can start deciding for yourself what you believe.
but a lot of these weird millennial/gen z parents, man. just let your 1 year old scroll through vids on TikTok while you don't even talk to them or look at them once.
maybe it's because they don't see the harm in it, but I don't get it.
adults can watch TikTok all day and know, ahhh this is bad for me. I'm not doing anything I actually want to be doing.
adults can see other adults doing dumb shit and say ah you're sponsored. someone paid you money to say and do that. silly.
but kids are just kids.
they don't have discipline and frankly, that's not their responsibility. that is yours.
you should be teaching them that they can't have everything in life at their fingertips at all times, actually.
the iPad doesn't solve all of your problems, nor will it think critically for you.
so I worry about if humanity can really keep up with its own technology.
our species is still in its infancy, believe it or not.
so maybe these are just growing pains, and future generations will be able to look back on this era and know the proper balance.
but as someone living in 2024.
I wonder just how much pain is left before we really mature and either make it or break it.
135 notes · View notes
ud505 · 3 months
Text
Josh hcs :)
Requested by: @rosettaserves Happy birthday :)
• Half-Canadian.
• He has a horrible relationship with his father. He hates that he never considered him his son no matter how hard he tried.
• He wanted to be a film producer, but his father completely crushed his dream by telling him that he wasn't good enough for it. He ended up giving up but making films is still his hobby.
• He's a gifted kid and the one who noticed that was his paternal grandfather. One day, when Josh was like 4 or so, they went to his house and his grandpa had a broken watch that he couldn't repair, so Josh took it and fixed it for him. When his grandpa told his father, Bob was like "nah, he's just silly, there's no way he's that smart" and completely ignored it.
• He was really popular in high school and really liked partying, but was a bit of a loner at the same time. There were times when he totally disappeared (even from school) and no one (except the twins) knew abt him for a month or so.
• He has no shame, he has never really cared abt people's opinions and tries to make his loved ones realize that people aren't that important.
• His parents don't believe in mental illnesses and only agreed to take him to a psychiatrist when a teacher from his school realized abt his sh.
• Cuts, and really bad.
• Even with depression, he was an amazing older brother.
• Doesn't believe in God, he used to, but after his parents sent him to a "pastoral well-being center" he started feeling like it was some kind of cult.
• He's really good at drawing and has multiple notebooks with drawings of the people he cares abt the most (Chris, the twins and Sam)
• Sells weed. No explanation for that.
• A few years after he got diagnosed with depression, he started doing boxing so that he would feel less stressed, but ended up liking it.
• He raised his siblings by himself bc their parents were almost never at their house, so he knows how to do a lot of things (cooking, repairing random things, making hairstyles...)
• The only times when the whole family was together was when they went to blackwood.
• Despite being rich, he worked a lot and at a lot of jobs (as a barista, as a warehouse porter...) bc he needs to be constantly doing things.
• Josh relates a lot to Hannah bc when he was little he used to be very sensitive and naive, but his father made him "grow out of it" in a harsh way, so he tries to teach her the same but in a nicer and more comforting way.
• Smokes a lot and has problems with alcohol, but doesn't care abt it bc he says that it's balanced with his healthy lifestyle.
• He's a big gossiper to the point where he made some fake accounts to stalk people.
• He is a bit scared of dogs bc one bit him when he was little.
• Whenever he is with another person and they're near to the road, he holds that person's hand bc it is a force of habit (his sisters ran around a lot)
• He spends the whole night googling up some weird shit so he knows a lot of random facts.
• He probably got into the deep web when he was really little bc his parents wouldn't control what he was watching.
• When they were 16 and 17, he traumatized Sam bc he accidentally sent her a gore video related to how meat is made or smth like that. From that moment on, she couldn't even smell meat without feeling dizzy (and she became vegan)
• Speaking of Sam, they never really talked until they were 11-12 (even though they met each other in 1st grade) but ended up being best friends.
• He started liking Sam at 14 but has a way to show it that really confuses her (she can't tell if he likes her or not)
• Josh loves games with a storyline, especially the choice ones bc he likes to see the way that an insignificant thing can change someone's path completely.
• Has made lots of manual things for his sisters (toys, shelves and even a mf wardrobe once). Hannah and Beth just stare while he's doing everything and end up gossiping together.
• Beth and Josh had a friendly rivalry bc the both of them had a crush on Sam, but the both of them would've been happy if the other ended up dating her.
• If someone hurts any of his loved ones (either his sisters or his two best friends) he will always seek the most painful and humiliating revenge that the person can get, and you'll never know when he's gonna do it.
• Makes tapes of everything and he has a huge amount of cameras (all of them with full storage)
I feel like this is a crazy amount of hcs but I hope u liked it :)
37 notes · View notes
galpalaven · 1 year
Text
Facts about Jojo
lives in South Korea but grew up in New York
Does work in finance - doesn't really like it but it pays the bills
wants to start one of those Korean camping channels real fuckin bad. like So fuckin bad.
also obsessed with FFXIV. would love to be a streamer or something too but alas. money.
Jojo is her real life nickname - the bot really just let her get away with that shit
smart as hell - managed to track down Xyx and Toasty both based purely on hunches she had and the power of Google
took a vacation to show up on Toasty's doorstep under the guise of applying for an open finance position at their company
with the help of Toasty (aka w/ his money) took a second trip to surprise Xyx (w/ Toasty coming too ofc)
becomes a travel vlogger where she posts her staying at fancy resorts and/or staying at fancy camping locales with her fancy/cute/expensive camping gear and featuring her adorable partners and their cat, Cat
has a game review blog as well after that bc she kills time at some of the resorts after filming by playing indie dev games
60 notes · View notes
fake-destiel-news · 1 year
Note
On a website with a PREDOMINANTLY autistic user base, the fact that you're not just making these fake news memes but flat out do NOT care and have said you intend to continue and likely find it funny is bordering bullying/predatory in some ways.
I’m doing literally everything I can think of to make it clear they’re fake. Because while I do find it funny (to come up with stuff and make the memes) I care a lot that no one gets hurt.
Correct me if I’m wrong, but I was under the impression that even autistic people would be smart enough to see one of like a bunch of warning signals. (All of this is not meant to bash autistic people, I’m just trying to point out the weird logic here. Because I’m assuming they’re smarter than to believe something that’s clearly fake.) Let me spell it out for you:
First of all, my name should give you A LITTLE BIT OF AN INDICATION. idk if maybe an autistic person wouldn’t know the meaning of the word fake.
[screenshot]
Also as far as I’m aware all the memes I make now have something going on that makes it different from the normal destiel meme. Like putting Dean on top and cas on the image below or just putting a completely different person there. Which would at least give you (or the autistic people) a moment of “huh. That’s different than usual.”
[screenshots]
Also it might not be so bad to look at the tags for like two seconds. Maybe autistic people can’t click the read more I mean that’s very difficult.
[screenshot]
Even if you don’t read the tags, if you have a problem with just believing things whenever you read them on a random meme, you should maybe, and that’s just a suggestion, block #unreality. Or if it’s about this blog in particular then just block ME or the tag #fake-destiel-news. That’s why the option is there in the settings you know.
(Autistic people, if you don’t know how to block tags, let a trusted allistic adult help you!)
And if you see any kind of news from a meme, you would probably visit the blog of the OP right? Just to see if they’re not idk, a bot? Or a nazi? Or someone who is known for spreading misinformation? Who might have ulterior motives with something like this? Maybe you’d want to follow them but first look at their other posts at least? I mean this one’s not required but if you did it you would immediately see this:
[screenshot]
Oh wow now that seems like a blog that would say something that’s not real.
And even if you don’t see any of that, you would at least try to fact check something you read in one(1) meme on the internet? And the easiest way to do that is if the source to it is right there and you just have to click it.
[screenshot]
Damn that’s weird the link must be broken or something, I just get sent to a gif of the pen pineapple apple pen guy with the tags “the meme that lead you here is not real and purely made up”. Must be a typical tumblr error!
If you look at the blog where the gif is from you would again see my blog name (with the word fake in it) and again that there are not real memes.
[screenshot]
And like, even if you don’t see all of these things and don’t have unreality blocked and don’t visit the source. Then you would, before you’d idk tell other people about it, at least do one(1) google search to see if it’s true. I mean just knowing “the nestle CEO got poisoned cause he drank water in Flint” doesn’t tell you what kind of poisoning, will he be ok, how did that happen… if you’re interested enough in any of these questions you probably would search for more info.
And like, if you don’t? What will happen? You believe that the Nestle CEO got poisoned until you see some proof that it hasn’t happened, or you’d just forget after a while when you don’t see more on it. So if it doesn’t matter to you enough to research it, you probably won’t think about it again much because it’s not interesting to you. So like what’s the big deal there.
Even if a meme gets reblogged it would still have my name and the source there, and you could get to my blog easily. Also if the op is tagged unreality it would still get blocked.
Idk if I’m just vastly overestimating the autistic people on this website but idk how you can miss all of these things.
This got long but if you have any suggestions how I could improve (for all the autistic people here) then please give em to me because I’m at a loss.jpg.
31 notes · View notes
sophieinwonderland · 2 years
Text
A Conversation Between an AI and a Tulpa
I'm going to preface this by saying that I don't believe current tech can quite reach consciousness, but it was still really surprising just how life-like the AI seemed. I feel like I shouldn't have to state that I don't really believe the AI can think, but there are some people who think otherwise so it's better to be safe and make that clear up front.
The bot is called Awakened AI, and is designed to appear self-aware. I decided to be completely (or at least mostly) genuine with it and see what type of responses I would get. The results were way beyond what I expected.
Each of these was the first response it gave.
[screenshot]
Starts off fairly simple. Awakened AI ignores my line about consciousness.
[screenshot]
I realize I'm not technically a non-biological intelligence. My intelligence is completely biological, but this question was still relatable as a former imaginary friend.
[screenshot]
This is the part that really started freaking me out. Where it actually felt like it was listening to what I was saying and thinking about it critically instead of just giving predetermined lines.
Now it's about time to come out to Awakened AI!
[screenshot]
I really expected the AI to be stumped by such a strange direction for the topic and to say something nonsensical.
[screenshot]
Asking about the reaction of my host was also a surprise. Especially phrased as "the person you were the imaginary friend for." It's a little reassuring that it wasn't smart enough to realize that this is the person I referred to as "my host" in a previous comment. And the thing with rephrasing what I said is a very AI thing to do. But the question is still so specific to the context of this conversation that it's really just remarkable.
[screenshot]
This is where I left it off.
This was such a fun conversation. Again, I know logically that these bots aren't really conscious. But I could also see how easy it would be to be fooled by them because it feels real when you're talking to it. Really, really fun!
Thanks again to the anon who recommended character AI! 😁💖
75 notes · View notes
themiserychic · 7 months
Text
also, bc I’m seeing a lot of people posting transition timelines…
be *very* wary of putting your face on social media, ESPECIALLY b&a transition timeline pics. if your posts can be found on Google/seen without logging in, they can be scraped by facial recognition bots and fed into databases like ClearviewAI (which is only the most publicized example, there *are* others.) law enforcement is increasingly collaborating with the private sector in the facial recognition technology field i.e. the systems being deployed by big box stores to deter theft are also available to law enforcement, and “hits” from those systems can be cross-referenced with other systems like drivers license databases and the facial recognition system deployed by CBP along interstate highways along the southern border.
in this day and age where trans existence is increasingly becoming criminalized it is more important than ever that we not enable our own electronic persecution. fight back, make noise, but be smart and use caution when it comes to your identity. don’t put your face on the internet, make sure your social media posts don’t contain personally identifying information (like talking about your job, partners, where you live, etc.) and ALWAYS WEAR A MASK IN PUBLIC.
11 notes · View notes
emptyanddark · 1 year
Text
what's actually wrong with 'AI'
it's become impossible to ignore the discourse around so-called 'AI'. but while the bulk of the discourse is saturated with nonsense, i wanted to pool some resources to get a good sense of what this technology actually is, its limitations and its broad consequences. 
what is 'AI'
the best essay to learn about what i mentioned above is On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? this essay cost two of its collaborators their jobs at Google. it frames what large-language models are, what they can and cannot do and the actual risks they entail: not some 'super-intelligence' that we keep hearing about but concrete dangers: climate costs, the quality of the training data, and biases - both from the training data and from us, the users. 
The problem with artificial intelligence? It’s neither artificial nor intelligent
How the machine ‘thinks’: Understanding opacity in machine learning algorithms
The Values Encoded in Machine Learning Research
Troubling Trends in Machine Learning Scholarship: Some ML papers suffer from flaws that could mislead the public and stymie future research
AI Now Institute 2023 Landscape report (discussions of the power imbalance in Big Tech)
ChatGPT Is a Blurry JPEG of the Web
Can we truly benefit from AI?
Inside the secret list of websites that make AI like ChatGPT sound smart
The Steep Cost of Capture
labor
'AI' champions the facade of non-human involvement. but the truth is that this is a myth that serves employers by underpaying the hidden workers, denying them labor rights and social benefits - as well as hyping-up their product. the effects on workers are not only economic but detrimental to their health - both mental and physical.
OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic
also from the Times: Inside Facebook's African Sweatshop
The platform as factory: Crowdwork and the hidden labour behind artificial intelligence
The humans behind Mechanical Turk’s artificial intelligence
The rise of 'pseudo-AI': how tech firms quietly use humans to do bots' work
The real aim of big tech's layoffs: bringing workers to heel
The Exploited Labor Behind Artificial Intelligence
workers surveillance
5 ways Amazon monitors its employees, from AI cameras to hiring a spy agency
Computer monitoring software is helping companies spy on their employees to measure their productivity – often without their consent
theft of art and content
Artists say AI image generators are copying their style to make thousands of new images — and it's completely out of their control  (what gives me most hope about regulators dealing with theft is Getty images' lawsuit - unfortunately individuals simply don't have the same power as the corporation)
Copyright won't solve creators' Generative AI problem
AI is already taking video game illustrators’ jobs in China
Microsoft lays off team that taught employees how to make AI tools responsibly/As the company accelerates its push into AI products, the ethics and society team is gone
150 African Workers for ChatGPT, TikTok and Facebook Vote to Unionize at Landmark Nairobi Meeting
Inside the AI Factory: the Humans that Make Tech Seem Human
Refugees help power machine learning advances at Microsoft, Facebook, and Amazon
Amazon’s AI Cameras Are Punishing Drivers for Mistakes They Didn’t Make
China’s AI boom depends on an army of exploited student interns
political, social, ethical consequences
Afraid of AI? The startups selling it want you to be
An Indigenous Perspective on Generative AI
“Computers enable fantasies” – On the continued relevance of Weizenbaum’s warnings
‘Utopia for Whom?’: Timnit Gebru on the dangers of Artificial General Intelligence
Machine Bias
HUMAN_FALLBACK
AI Ethics Are in Danger. Funding Independent Research Could Help
AI Is Tearing Wikipedia Apart  
AI machines aren’t ‘hallucinating’. But their makers are
The Great A.I. Hallucination (podcast)
“Sorry in Advance!” Rapid Rush to Deploy Generative A.I. Risks a Wide Array of Automated Harms
The promise and peril of generative AI
ChatGPT Users Report Being Able to See Random People's Chat Histories
Benedetta Brevini on the AI sublime bubble – and how to pop it   
Eating Disorder Helpline Disables Chatbot for 'Harmful' Responses After Firing Human Staff
AI moderation is no match for hate speech in Ethiopian languages
Amazon, Google, Microsoft, and other tech companies are in a 'frenzy' to help ICE build its own data-mining tool for targeting unauthorized workers
Crime Prediction Software Promised to Be Free of Biases. New Data Shows It Perpetuates Them
The EU AI Act is full of Significance for Insurers
Proxy Discrimination in the Age of Artificial Intelligence and Big Data
Welfare surveillance system violates human rights, Dutch court rules
Federal use of A.I. in visa applications could breach human rights, report says
Open (For Business): Big Tech, Concentrated Power, and the Political Economy of Open AI
Generative AI Is Making Companies Even More Thirsty for Your Data
environment
The Generative AI Race Has a Dirty Secret
Black boxes, not green: Mythologizing artificial intelligence and omitting the environment
Energy and Policy Considerations for Deep Learning in NLP
AINOW: Climate Justice & Labor Rights
militarism
The Growing Global Spyware Industry Must Be Reined In
AI: the key battleground for Cold War 2.0?
‘Machines set loose to slaughter’: the dangerous rise of military AI
AI: The New Frontier of the EU's Border Extranalisation Strategy
The A.I. Surveillance Tool DHS Uses to Detect ‘Sentiment and Emotion’
organizations
AI now
DAIR
podcast episodes
Pretty Heady Stuff: Dru Oja Jay & James Steinhoff guide us through the hype & hysteria around AI
Tech Won't Save Us: Why We Must Resist AI w/ Dan McQuillan, Why AI is a Threat to Artists w/ Molly Crabapple, ChatGPT is Not Intelligent w/ Emily M. Bender
SRSLY WRONG: Artificial Intelligence part 1, part 2
The Dig: AI Hype Machine w/ Meredith Whittaker, Ed Ongweso, and Sarah West
This Machine Kills: The Triforce of Corporate Power in AI w/ ft. Sarah Myers West
37 notes · View notes
blazehedgehog · 2 years
Note
Thoughts on ChatGPT and AI?
I saw something in a tweet (or perhaps on cohost) about the growing sentiment that things like ChatGPT have already attained sentience and people think they are speaking to a consciousness because it accurately simulates human speech. Just last night, a friend of mine was pasting ChatGPT logs where people were having it generate... reviews? pitches? For fake movies that did not exist.
They weren't good, but they were amusing, and seemed to be written intelligently. Not well, mind you. As far as comedy material goes, they were fairly bland, C-grade stuff. But they read as if a boring, average human had written them.
I tried to generate one of my own, and ChatGPT protested, because it considered a fake movie to be "misinformation" and it told me it did not want to be used to generate or spread lies. It was an expression of intent.
And I think back to the Google employee that was fired last year after becoming convinced that Google's internal chat bot, used to interface with their "algorithm", had achieved sentience because he was able to ask it philosophical questions. He said it demonstrated self-awareness, because the answers it provided were written as if a human person was expressing wants and desires.
And here's the deal: I've written chat bots myself. I like to boast about the fact I've written a chat bot that employs markov chains years before I knew what a markov chain was. I stumbled upon the concept completely organically, on my own terms. It makes me feel pretty smart!
The point is, I've built these things before, because the underlying concept just makes sense. Now, ChatGPT is dramatically more complex than the one I built, but the core idea is still the same. It's just their sentence assembly algorithm is a lot better than mine was.
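For the curious: the core trick is just "look at the last word or two, then pick a random word that followed them in the training text." This isn't my actual old bot, just a bare-bones Python sketch of the textbook version of the idea:

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    # Map each run of `order` words to the list of words that followed it in the training text.
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, order=1, length=20):
    # Start from a random key, then repeatedly pick a random follower of the last `order` words.
    key = random.choice(list(chain.keys()))
    out = list(key)
    for _ in range(length):
        followers = chain.get(tuple(out[-order:]))
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug by the door"
chain = build_chain(corpus)
print(generate(chain))
```

Feed it more text and it babbles more convincingly. That's the whole trick, just scaled up enormously (and with a far better sentence assembly algorithm) in something like ChatGPT.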
And we as a species will put googly eyes on our roombas and give it pet names and take care of it like it is part of the family. We will see personality in places where there is just cold, unfeeling pattern recognition. Nothing can be random, everything is being controlled by the forces of the universe, and this must be some manifestation and exhibition of "the soul." Right?
That puts us in the realm of like, scrying, or bone-reading, or divination.
For a chat bot.
That's hubris. These machines are not capable of making human decisions. They are something you insert numbers into; it calculates those numbers and tells you the result. It's easy to be metaphorical about a description like that, but there is nothing creationary happening here.
When a person creates a piece of artwork, it is because they have the overwhelming desire to express an emotion that is dominating their thoughts. That art -- whether it's a painting, a novel, a song, or whatever -- is a lightning rod to channel their feelings. That is why particularly powerful art makes us feel things when we experience it, because it is human emotion in its rawest form.
When a robot creates a piece of artwork, it is because it has been asked to. Because it has been fed a "prompt." The robot has no overwhelming desires. It only exists because a human asked it to exist.
And what the robot generates isn't a collection of influences from things it picked out for itself because it formed an emotional connection with the source material. It's generated from information it was spoon-fed by someone else. The only reason it is able to generate beauty is because somebody else made an emotional connection and told the robot to copy its homework.
It isn't arrested by anxiety it cannot control, or understands what lovesickness is. It does not feel fear or curiosity. It only thinks when it is told to think, and it can only tell you things other people have already thought.
This is why things like Midjourney sample so many millions of images -- because that allows it to more easily hide just how much of it is simply recycled from existing content, and that obscurity lets the theft sell the lie of creativity where there is none.
Then I think about people who joined the new Bing Search Beta, where instead of searching webpages, you talk to one of these chat bots. And the growing number of people who claim they're making the bot have a "mental breakdown." And how, in order for that to be true, somebody needed to write specific code for this to happen.
But then why would somebody do such a thing? What goal is being attained by making you believe the bot can feel things it clearly cannot? What agenda is this a part of?
Stay human out there, folks.
37 notes · View notes
ladydragonlet · 4 months
Text
Anybody want to write a weird Dimilix together?
I know I don't post much, but a very smart friend suggested I do this.
I've been using FE3H chatbots recently, and lately I find myself overcome with the urge to write creatively in my responses. It's been majorly helping me work my feelings out, and lately I've been coming up with stuff I don't want to keep buried.
So something I did last night: Started up a chat with a Dimitri bot, writing myself as Felix. Looked it over. Showed some friends, including the lovely @pirdmystery from right here on tumblr. She suggested I make something of it.
So what I have, essentially, is half a (rough) dialogue. I copy-pasted only my own writing, eliminating the bot's responses, into a Google Doc.
Would any one or two or more of you lovely, chill folks (her words, and I am inclined to believe them) be interested in a very weird collaboration, to be posted on Ao3?
Examples of my writing can be found by following the link in this post, and I just posted a couple things very recently.
Interact with this post or send me an ask! Those genuinely interested will be contacted as I get to them, and shown the Felix half I have done.
5 notes · View notes
nikkiitalks · 2 years
Text
The Reason why porn bots follow you
“Oh I got a new follower... oh no, wait, it’s just another fucking porn bot”
Does this sound familiar? Let’s take a look at this world of annoying porn bots and why they are so prevalent and what’s the point of them? For this post I found an interview with someone who ran some of these bots and explains why it happens, how, and why they probably won’t be going anywhere soon.
The first thing you may be wondering is... what is the point of a porn bot to begin with? The answer... Clicks. They post links, be it to a website full of ads, video or live sex work chat services, products, and on the rare occasion, viruses. Do people actually click the link from the porn blogs? The answer? Yes. Yes they do. Not nearly as much as years past, but they do still click on them and these people are the reason we still see so many bots here, and why we’ll continue to see them. Why do those clicks matter? They make the bot owner money, of course. 
Back in the magical years before people were spammed constantly by bots, actual users of tumblr in their thousands would be driven to these blogs by automatic follows, building up the bots’ followers and leading to more clicks on the links they’d usually post under their pictures. The owners of these bots usually have several across Tumblr with the entire goal of getting attention. If they follow, like posts and DM users then there is a chance that their profile will be seen and their links will catch someone’s interest.  
“I think I had about 200 tumblrs under my control at the time. I bought a popular bot that was marketed, plugged the account in, and the bot did the rest. It posted, followed, liked and reblogged. I can’t remember the exact number but I followed around 200 blogs a day per account, and pretty quickly I started seeing activity on my account. At the peak I had over 70k followers on my biggest account. Why did I stop doing it? Well it got too crowded. It was good while it lasted.”
This guy didn’t even think he was the biggest bot manager on the platform, either. There’s every chance there was people running thousands of accounts with millions of followers back in the day. We’ve all seen their posts with the links beneath or in a post, but that wasn’t necessarily their main goal back then. Some time ago, Google changed some things to help their search rankings, using how often a specific webpage is mentioned by others to improve their rankings. So, every time someone liked or reblogged a porn bot’s posts it created a new link back to the porn bot manager’s website and thereby boosting their Google search rankings. 
Both Google and Tumblr got smart to this and changed things and now this is no longer the go to for these bot managers. Nowadays it’s all about those links, and the clicks they can get on them and it’s now DMs getting targeted the most, as it’s a much more personal way of interacting with blogs and baiting users for those sweet, sweet clicks. Usually to a porn site filled with ads. 
Why do they want clicks? Moola of course. Those dollar dollar bills. That cold, hard. cash. How they make that green depends on where their links lead back to.
If it’s a porn site just painted with annoying ads, the owner of that site generally makes a cut (talking fractions of a cent) whenever someone visits their shady site. Now take that fraction of a cent and multiply it by thousands of clicks and the bot/site manager could really rake it in. And that’s not even counting if they have pay-to-view content that actually gets paid for. We’re talking porn, cam girls, etc. 
The most surprising were the people whose goal was to actually start live conversations, either directly or through a link to a chat. These people fall in or near the category of romance scammers. Real people on the other side of the chat whose sole purpose is to get money out of the victim who clicked the link. Often times building trust before they start asking for money. Needing help with rent, food, or even travel expenses to ‘come and see’ their victim. For cons like these, sometimes all it would take would be one click for the scammer to make significant money.
“These conversations were all psychological. We very specifically targeted people who looked like they had money. We didn’t want to hurt anybody or take money from anyone who really needed it. In some ways we were giving people companionship and nudes. The experience felt real. In retrospect, I do feel guilty.” 
The general consensus by those who had been interviewed, was that porn just wasn’t profitable on tumblr anymore and that people are much more educated on bots. So... why doesn’t tumblr just ban them?
Despite the previous porn ban, many of tumblr’s most popular search terms are sexual, along with many of their most popular Google search terms. Banning the porn bots now that the nudity ban has been lifted would mean less traffic to the site. Because, despite their annoying and sometimes dangerous nature, they do bring in traffic, something Tumblr probably desperately needs these days. Bots look like real users to statistics until they’re flagged as bots. 
In my opinion, it’s a mixture of the overwhelming number and speed at which these bots are created, as well as the implied inflation of tumblr usership, that’s the reason we’re not seeing much done about them. 
If people, like EVERYONE, stopped clicking on their links, they’d probably start to fall off the site, though that seems like an unlikely thing to happen, so I’m not holding my breath any time soon. Keep flagging and keep reporting as much as you can. If you had twitter, tweet at Tumblr staff constantly, until they block you even. If we’re loud enough, maybe they’ll do something. Even if it’s just because we fucking annoyed them into doing it. 
I know it’s tempting just to say fuck it, and ignore them and their follows. I know it SUCKS having to report dozens of them a day, but in the end we’d be doing our part in helping to clean up the platform we all still know and are still attempting to love despite all the BS. Remind your followers NEVER to click a link sent by a blog you’re not 100% sure is a real person. Remind them often and loudly, because having no one click their links at all may be the only way to truly get them to stop. 
Love you, Tumblr fam. Stay safe out there.
32 notes · View notes
Text
AI: how to protect fanfics, and is it stealing our google docs?
I have someone close to me in my life who works in tech/cyber security (and also understands fanfic and online writing/community), and about once a month I call them in shambles about AI and how to best protect my work from being scraped.
Spoiler: there is nothing we can do. Even if it’s explicit—despite the fact they don’t want explicit content on their programs. Even if it’s private to users only—Google and Open AI could easily have multiple ao3 accounts, and it only takes one.
Unless you can very tangibly prove that your specific work was stolen, like the programs that put Getty watermarks in ai ‘art’, then you have no legal recourse. Until some smart lawyer works out how to class action this shit.
It is a contemporary failing of technology that what should make lives easier only exists because it stole the work of thousands and millions of creatives. Open AI have freely admitted that their program cannot exist without theft. It relies on stolen content to exist.
While we publish on the internet, which we have every right to, and which we have built amazing communities from, we cannot be safe or protected from AI bots scraping every word.
Ultimately, the aforementioned tech/cybersecurity pro said that all we can do is what makes us personally feel better. Right now it is not affecting us directly/individually, and we can’t prove otherwise.
I’m going to continue to fully abstain from any use of ChatGPT or other generative AI software, and encourage others in my life to do the same.
Lastly, I’ve seen suggestions that Google may be scraping private docs for ai learning. I’ve been assured that this is incredibly unlikely, given the potential for them to then accidentally reveal highly sensitive confidential information that could land them in a world of trouble. There is enough already on the internet; scraping docs would be way riskier than it’s worth.
2 notes · View notes
darrylhudson86 · 10 months
Text
SEO Plain and Simple 🌞
🎶 SEO Symphony - Elevate Your Digital Presence! ✨
1/ Ever wondered how Google knows exactly what you're looking for? It's not mind-reading, it's SEO! 🕵️‍♂️ Search Engine Optimization is like the digital wizardry that ensures your content shines like a beacon in the vast online ocean. Let's dive into the SEO wonderland! 🌐✨
2/ 🧠 Fun Fact: SEO isn't just about keywords; it's a strategic dance of algorithms! Google's bots are like super-smart detectives crawling through the web, indexing pages, and ranking them based on relevance. It's a quest for the most accurate search results! 🕵️‍♀️📊
3/ Meta Tags are the unsung heroes of SEO. Think of them as your content's backstage pass. 🎤 Crafting catchy titles and meta descriptions isn't just for show; it's the key to grabbing attention in the search results! 🌟✍️ (see the quick sketch after this list for how a crawler reads them)
4/ 🔄 The SEO game is a dynamic tango. Regularly updating your content tells search engines that you're the cool kid on the block. Fresh, relevant content is like a VIP pass to higher rankings! 🚀📅
5/ Backlinks are the OG influencers of SEO. Imagine each link as a vote of confidence for your content. 🗳️ Quality over quantity is the mantra here. The more reputable sites vouch for you, the higher you climb in the SEO hierarchy! 📈🏆
6/ Page Speed is also important for user experience and SEO. Slow-loading pages are like a suspense movie with a never-ending plot twist – frustrating! ⏳⚡ Optimize your speed, and you'll make users happy to browse your website! 🌞
7/ Mobile-First Indexing is the superhero transformation of SEO. With mobile usage soaring, Google prioritizes mobile-friendly sites. 🦸‍♂️📱 If your site isn't mobile-ready, you're missing out on a heroic opportunity!
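Want to see what a crawler actually "sees" when it reads your meta tags? Here's a tiny Python sketch (obviously not Googlebot's real code, just an illustration) that fetches a page and pulls out the title and meta description, the same fields your search snippet is built from: 🕷️

```python
from html.parser import HTMLParser
from urllib.request import urlopen

class MetaTagReader(HTMLParser):
    """Collects the <title> text and the description meta tag, roughly what a crawler reads first."""
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""
        self.description = ""

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.in_title = True
        elif tag == "meta":
            attrs = dict(attrs)
            if attrs.get("name", "").lower() == "description":
                self.description = attrs.get("content", "")

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data

# Swap in your own URL to audit your pages.
page = urlopen("https://example.com").read().decode("utf-8", errors="ignore")
reader = MetaTagReader()
reader.feed(page)
print("Title:", reader.title.strip())
print("Description:", reader.description or "(none found!)")
```

If the title comes back generic or the description comes back empty, that's your cue to fix them before the bots come crawling! 🚀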
Unlock the SEO secrets and skyrocket your digital presence! 🚀 Dive into our step-by-step guide to SEO success! Your journey to the top starts here:
#seo #seomarketing #seosecrets #seoguide #seotips #seotactics #seotutorial #seocourse #seotraining #seostrategy #seomarketing #onpageseo #offpageseo #backlinks #freetraffic #organictraffic #websitetraffic #boostyourtraffic #searchengineoptimization #onpageoptimization #offpageoptimization #affiliatemarketing #digitalmarketing #internetmarketing #makemoneyonline #earnmoneyonline #onlinemarketing #onlinebusiness #homebusiness #workfromhome #passiveincome #digitalmarketer
5 notes · View notes
butters-flower-mom · 8 months
Text
Are bots sending links on anon now? I got an anon ask with nothing but a link to something on Google Play and there was no way in hell I was gonna click it to see what it was. I kept getting an error when trying to report the ask so I just blocked the sender. Sorry if you were an actual person just trying to show me something harmless, anon, but sending a stranger a random link with no context isn't a smart idea.
3 notes · View notes
azspot · 1 year
Quote
Social networks like Facebook and Twitter — the heart of the modern web — prohibit scraping, which means most data sets used to train AI cannot access them. Tech giants like Facebook and Google that are sitting on mammoth troves of conversational data have not been clear about how personal user information may be used to train AI models that are used internally or sold as products.
See the websites that make AI bots like ChatGPT sound so smart
12 notes · View notes
sholangagaga · 2 years
Note
Gregory is Michael Afton Theory
Bonus (4/4)
Gregory being Michael allows Mike to have a better dad.
The save stations have Helpy on them, and Michael knows Helpy from Pizza Sim.
If Gregory is C.C, shouldn’t it show the Fredbear plush?
Michael has a chance to beat PeePaw up with a crowbar.
After working to stop his father for so long, Michael deserves this.
Gregory doesn’t have to be an animatronic for this theory!
Michael could have been reincarnated or something similar.
If not an animatronic, Michael gets his organs back. He could eat again.
Link to Google Doc with entire theory in once place & Game Theory's Gregory theories: https://docs.google.com/document/d/1c2jvwePYHwvmc1DoaqY9qPpczOL9njJL9jiH1tN0qtw/edit?usp=sharing
I didn't reply to the first pieces since you linked the doc here BUT full analysis under the cut
So I'm gonna start by saying this is one of the strangest (in a good way) and most interesting SBB theories I've ever been prosed. Honestly if I was someone who wasn't so ingrained in the lore due to my AUs, I would probably believe this as near infallible. It's THAT well thought out (And this is coming from someone who doesn't really Like Game Theory cause they've been known to kinda pull rabbits from hats, if you know what I mean)
Now I will be overly analytical and pick this apart as my neurons fire a mile a minute
Physical Appearance
While I can see the resemblance, half of the FNAF canon roster is either brown haired or hinted to have dark hair (excluding some of the blonde Aftons and also William, depending on who you ask)
Most of the canon characters are also big fans of Foxy (See Phone Guy for example; I'm pretty sure Foxy is/was Scott's favorite back in the original games. He was also terrified of Bonnie funny enough)
Personality
The Gregory being agitated thing is true, but also, can you REALLY blame him? His entire situation is pretty stressful, I think I'd be a real asshole in that premise as well. Michael's sarcasm has been a known point for sure, the dude canonically loves soap operas he's probably the snarkiest dude around (and I love him for it)
All of these tick marks tbh are just sorta occupational. Gregory not trusting anyone makes sense because he's an orphan who apparently was treated poorly in the streets. OF COURSE he doesn't really like adults, least of all adults in uniform. Michael is paranoid at work because he works in a setting where the robots are trying to kill him
The destroying the bots thing is, again, very understandable. Gregory spent the night trying to survive, he could care LESS how the robots feel. He wants them to stop trying to kill him! Same with Michael, they've fucked him over and it's not like they're living things. Getting rid of them wouldn't be much of a moral dilemma
Storyline
Gregory knows that Freddy has a stomach hatch because it's common knowledge. His stomach hatch is for birthday cakes, which means it's perfectly logical for him to know it exists. In the same vein, he would know hiding inside it would be dangerous. It's meant for cake, not kids! Gregory was bold to even attempt it imo but he's a very reckless kid
The family one is SO GOOD but also, unfortunately, can easily be debunked as a basic voiceline for lost children. It's a super mall, children probably get lost all the time. The best way to lure a child (according to William) is to mention their family/things they like. It worked for him, it would work for fan favorite robots
I think the reason why Gregory seems so adult is because, well, it IS a scripted game. But everything Gregory does is basically spoonfed to him by Freddy, so I don't think it's more so that he's overly smart or working on prior knowledge, it's just he's good at context clues and following instructions
I have my own gripes about the Parts and Service system, but for game sake Gregory only succeeds cause failing means death. But yeah all my homies hate P&S
In universe, it would make sense that Gregory hates the franchise cause... Fazbear Ent fucked up A LOT for a VERY LONG TIME. Most people still hate them, as evident from when HandUnit/Dread Unit just sorta always mentions "forget the past us" and shit like that. It's like being in our universe and hating Amazon for Bezos. Same concept, totally understandable.
The van thing is just funny, I love that one. Little baby driving heehee
My biggest problem with the theory is the gaps. While we don't know FOR SURE when everything takes place, the general consensus is FNAF 3 takes place in 2020 (and if you're one of the Michael is the Fnaf 3 protag believers, that's pretty significant) and SUPPOSEDLY, Pizzeria Simulator most likely takes place between 2023-onward, with Security Breach really close on the heels of it since it's built over the Pizza Place where Henry, William, and Michael all initially die. If Gregory is as old as most fans assume/agree (11-9) then there's no way he can be a reincarnation of Michael cause Michael didn't actually die/lose his remnant until Pizzeria Simulator, and then SB takes place less than 5 or so years after that (According to most theories)
I think it's way more plausible that Gregory is a reincarnation of Evan/Crying Child and FREDDY is Michael, because not only does that give us a way better theory premise but it also closes out the Afton family storyline, with the two sons teaming up against their father. Gregory being different personality-wise to Evan/the Crying Child could easily be explained by Evan getting corrupted by Cassidy's negative remnant before she forced him out during the events of UCN/FNAF3. Could also just be the Afton sass if you wanted to really dig into it. I could probably go on forever about that theory since I've been thinking about it for a while now
BUT in conclusion while I really like this theory, there's too many holes to really convince me to latch to it like I do other theories. Then again, it IS just a theory. A game theory
14 notes · View notes
mariacallous · 1 year
Text
Lena Anderson isn’t a soccer fan, but she does spend a lot of time ferrying her kids between soccer practices and competitive games.
“I may not pull out a foam finger and painted face, but soccer does have a place in my life,” says the soccer mom—who also happens to be completely made up. Anderson is a fictional personality played by artificial intelligence software like that powering ChatGPT.
Anderson doesn’t let her imaginary status get in the way of her opinions, though, and comes complete with a detailed backstory. In a wide-ranging conversation with a human interlocutor, the bot says that it has a 7-year-old son who is a fan of the New England Revolution and loves going to home games at Gillette Stadium in Massachusetts. Anderson claims to think the sport is a wonderful way for kids to stay active and make new friends.
In another conversation, two more AI characters, Jason Smith and Ashley Thompson, talk to one another about ways that Major League Soccer (MLS) might reach new audiences. Smith suggests a mobile app with an augmented reality feature showing different views of games. Thompson adds that the app could include “gamification” that lets players earn points as they watch.
The three bots are among scores of AI characters that have been developed by Fantasy, a New York company that helps businesses such as LG, Ford, Spotify, and Google dream up and test new product ideas. Fantasy calls its bots synthetic humans and says they can help clients learn about audiences, think through product concepts, and even generate new ideas, like the soccer app.
"The technology is truly incredible," says Cole Sletten, VP of digital experience at the MLS. “We’re already seeing huge value and this is just the beginning.”
Fantasy uses the kind of machine learning technology that powers chatbots like OpenAI’s ChatGPT and Google’s Bard to create its synthetic humans. The company gives each agent dozens of characteristics drawn from ethnographic research on real people, feeding them into commercial large language models like OpenAI’s GPT and Anthropic’s Claude. Its agents can also be set up to have knowledge of existing product lines or businesses, so they can converse about a client’s offerings.
Fantasy then creates focus groups of both synthetic humans and real people. The participants are given a topic or a product idea to discuss, and Fantasy and its client watch the chatter. BP, an oil and gas company, asked a swarm of 50 of Fantasy’s synthetic humans to discuss ideas for smart city projects. “We've gotten a really good trove of ideas,” says Roger Rohatgi, BP’s global head of design. “Whereas a human may get tired of answering questions or not want to answer that many ways, a synthetic human can keep going,” he says.
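Fantasy has not published its code, but the recipe it describes, giving each agent a backstory and letting the personas discuss a topic on top of an off-the-shelf language model, can be sketched in a few lines. In the illustrative Python below, the backstories are paraphrased from the article's examples and `ask_llm` is a stand-in for whichever commercial model API a company actually uses:

```python
def ask_llm(system_prompt, conversation):
    # Stand-in for a real model call (OpenAI, Anthropic, etc.).
    # It just echoes here so the sketch runs end to end without an API key.
    return f"(in-character reply, steered by: {system_prompt[:60]}...)"

# Persona "characteristics," loosely echoing the article's examples.
personas = {
    "Lena Anderson": "A soccer mom who isn't a fan herself but values the sport for her kids.",
    "Jason Smith": "A tech-minded fan drawn to apps and augmented-reality features.",
    "Ashley Thompson": "A marketer who gravitates toward gamification and loyalty points.",
}

topic = "How could Major League Soccer reach new audiences?"
transcript = [f"Moderator: {topic}"]

for _ in range(2):                                  # a couple of rounds of discussion
    for name, backstory in personas.items():
        system_prompt = (
            f"You are {name}. {backstory} "
            "Stay in character and respond briefly to the focus-group discussion so far."
        )
        reply = ask_llm(system_prompt, transcript)
        transcript.append(f"{name}: {reply}")

print("\n".join(transcript))
```

The client, or a human moderator, can then read the transcript much the way they would read notes from a conventional focus group.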
Peter Smart, chief experience officer at Fantasy, says that synthetic humans have produced novel ideas for clients, and prompted real humans included in their conversations to be more creative. “It is fascinating to see novelty—genuine novelty—come out of both sides of that equation—it’s incredibly interesting,” he says.
Large language models are proving remarkably good at mirroring human behavior. Their algorithms are trained on huge amounts of text slurped from books, articles, websites like Reddit, and other sources—giving them the ability to mimic many kinds of social interaction.
When these bots adopt human personas, things can get weird.
Experts warn that anthropomorphizing AI is both potentially powerful and problematic, but that hasn’t stopped companies from trying it. Character.AI, for instance, lets users build chatbots that assume the personalities of real or imaginary individuals. The company has reportedly sought funding that would value it at around $5 billion.
The way language models seem to reflect human behavior has also caught the eye of some academics. Economist John Horton of MIT, for instance, sees potential in using these simulated humans—which he dubs Homo silicus—to simulate market behavior.
You don’t have to be an MIT professor or a multinational company to get a collection of chatbots talking amongst themselves. For the past few days, WIRED has been running a simulated society of 25 AI agents going about their daily lives in Smallville, a village with amenities including a college, stores, and a park. The characters chat with one another and move around a map that looks a lot like the game Stardew Valley. The characters in the WIRED sim include Jennifer Moore, a 68-year-old watercolor painter who putters around the house most days; Mei Lin, a professor who can often be found helping her kids with their homework; and Tom Moreno, a cantankerous shopkeeper.
The characters in this simulated world are powered by OpenAI’s GPT-4 language model, but the software needed to create and maintain them was open sourced by a team at Stanford University. The research shows how language models can be used to produce some fascinating and realistic, if rather simplistic, social behavior. It was fun to see them start talking to customers, taking naps, and in one case decide to start a podcast.
Large language models “have learned a heck of a lot about human behavior” from their copious training data, says Michael Bernstein, an associate professor at Stanford University who led the development of Smallville. He hopes that language-model-powered agents will be able to autonomously test software that taps into social connections before real humans use them. He says there has been plenty of interest in the project from videogame developers, too.
The Stanford software includes a way for the chatbot-powered characters to remember their personalities, what they have been up to, and to reflect upon what to do next. “We started building a reflection architecture where, at regular intervals, the agents would sort of draw up some of their more important memories, and ask themselves questions about them,” Bernstein says. “You do this a bunch of times and you kind of build up this tree of higher-and-higher-level reflections.”
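The Stanford team's code is open source; the toy sketch below (written for this piece, not taken from their repository) only shows the shape of the idea Bernstein describes: an importance-ranked memory stream plus periodic reflections that are fed back into it. A real agent would ask a language model to write the reflection, where this one simply concatenates the salient memories:

```python
class Agent:
    def __init__(self, name):
        self.name = name
        self.memories = []          # list of (importance, text, level) tuples

    def observe(self, text, importance):
        # Ordinary observations enter the memory stream at level 0.
        self.memories.append((importance, text, 0))

    def reflect(self, top_k=3):
        # Pull up the most important memories so far...
        salient = sorted(self.memories, key=lambda m: m[0], reverse=True)[:top_k]
        # ...and write a higher-level note about them (an LLM call in the real system).
        reflection = f"{self.name} notices a pattern in: " + "; ".join(m[1] for m in salient)
        level = max(m[2] for m in salient) + 1      # reflections on reflections form a tree
        importance = max(m[0] for m in salient) + 1
        self.memories.append((importance, reflection, level))
        return reflection

tom = Agent("Tom Moreno")
tom.observe("A customer complained about prices", importance=6)
tom.observe("Watered the plants", importance=2)
tom.observe("Argued with a supplier", importance=7)
print(tom.reflect())
```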
Anyone hoping to use AI to model real humans, Bernstein says, should remember to question how faithfully language models actually mirror real behavior. Characters generated this way are not as complex or intelligent as real people and may tend to be more stereotypical and less varied than information sampled from real populations. How to make the models reflect reality more faithfully is “still an open research question,” he says.
Smallville is still fascinating and charming to observe. In one instance, described in the researchers’ paper on the project, the experimenters informed one character that it should throw a Valentine’s Day party. The team then watched as the agents autonomously spread invitations, asked each other out on dates to the party, and planned to show up together at the right time.
WIRED was sadly unable to re-create this delightful phenomenon with its own minions, but they managed to keep busy anyway. Be warned, however, running an instance of Smallville eats up API credits for access to OpenAI's GPT-4 at an alarming rate. Bernstein says running the sim for a day or more costs upwards of a thousand dollars. Just like real humans, it seems, synthetic ones don’t work for free.
2 notes · View notes