#regulate AI
revengeofreaper32 · 4 months
Text
Please everyone! Read the link and its comments!
@ask-fat-eqg @askdrakgo @yuurei20 @aynare-emanahitu-blog @ray-norr @biggwomen @nekocrispy @neromardorm @pizzaplexs-fattest-securityguard @lonely-lorddominator @ask-magical-total-drama @ask-liam-and-co @thefrogman @pleatedjeans @theclassyissue @terristre @textsfromsuperheroes @raven-at-the-writing-desk @spongebobssquarepants
19 notes · View notes
sixstringphonic · 1 year
Text
“A recent Goldman Sachs study found that generative AI tools could, in fact, impact 300 million full-time jobs worldwide, which could lead to a ‘significant disruption’ in the job market.”
“Insider talked to experts and conducted research to compile a list of jobs that are at highest-risk for replacement by AI.”
Tech jobs (Coders, computer programmers, software engineers, data analysts)
Media jobs (advertising, content creation, technical writing, journalism)
Legal industry jobs (paralegals, legal assistants)
Market research analysts
Teachers
Finance jobs (Financial analysts, personal financial advisors)
Traders (stock markets)
Graphic designers
Accountants
Customer service agents
“‘We have to think about these things as productivity enhancing tools, as opposed to complete replacements,’ Anu Madgavkar, a partner at the McKinsey Global Institute, said.”
What will be eliminated from all of these industries is the ENTRY LEVEL JOB.  You know, the jobs where newcomers gain valuable real-world experience and build their resumes?  The jobs where you’re supposed to get your 1-2 years of experience before moving up to the big leagues (which remain inaccessible to applicants without the necessary experience, which they can no longer get, because so-called “low level” tasks will be completed by AI).
There’s more...
Wendy’s to test AI chatbot that takes your drive-thru order
“Wendy’s is not entirely a pioneer in this arena. Last year, McDonald’s opened a fully automated restaurant in Fort Worth, Texas, and deployed more AI-operated drive-thrus around the country.”
BT to cut 55,000 jobs with up to a fifth replaced by AI
“Chief executive Philip Jansen said ‘generative AI’ tools such as ChatGPT - which can write essays, scripts, poems, and solve computer coding in a human-like way - ‘gives us confidence we can go even further’.”
Why promoting AI is actually hurting accounting
“Accounting firms have bought into the AI hype and slowed their investment in personnel, believing they can rely more on machines and less on people.”
Will AI Replace Software Engineers?
“The truth is that AI is unlikely to replace high-value software engineers who build complex and innovative software. However, it could replace some low-value developers who build simple and repetitive software.”
75 notes · View notes
millionmovieproject · 14 days
Text
S2E1 of OG MacGyver: Mac is sent to stress-test an AI security system hidden in a mountain. It doesn't like him and tries to kill him, and keeps learning how to stop its own shutdown, until Mac makes it electrocute itself. In the end, he tells its creator not to give computers human jobs, and she kisses him because he's right.
5 notes · View notes
woodpengu · 1 month
Text
AI has its place in content rendering...
... but not the misuse we've seen in recent times.
AI is not an artist. It's an engine for meeting criteria given to it by a prompt. Which is quite handy for a game dev working on a project with an immediate deadline that requires 800 pieces of content (like backgrounds, settings, photographs/paintings, etc) or a writer needing a bunch of random titles or names to fill up space for a scene due next week. Just as a couple examples.
That being said, AI designed for content rendering should NOT use existing, non-public-domain (copyrighted and/or trademarked) content as its foundation. An artist taking inspiration from another artist's work isn't the same as a program scanning existing work to buffet-pick details and mash them together... not quite. Because AI doesn't "take inspiration" through observation the way the human brain can. We have a lot of imagination, nuance, and inherent cringe potential... meanwhile the computer is merely following highly-detailed instructions and reorienting its results based on feedback. Using existing artwork as a foundation for "something new" is not "taking inspiration"... it [to me] seems like vandalism and stealing in one fell swoop.
Because AI has its place [in content rendering], I feel it needs to be regulated rather than outright banned. Which is how I feel about a lot of subjects or ideas. Legislation declaring that AI cannot hold copyright over its creations is one big step, and the momentum of that intention can and should be carried further to protect artists [of all kinds]. Like clear examples of where AI is a handy tool versus a dastardly abuse of privileges. Or denying AI the kind of entity status that could turn it into a greed-feeder, instead funding the human charged with maintaining the computers strictly for the job of maintenance (not the content spewed by their AI monstrosity). But I ramble. I'm mostly an idea person, not an engineer who executes the ideas.
Anywhoodles... point being:
AI using any content to meet prompt criteria: major boo, thievery, violation of an artist's rights to their own work, "Buttocks, meet mine boot".
AI using public domain, royalty-free, or "I paid that artist under a contract we agreed to" content to go nuts: inspiring, motivational, get me the popcorn for this free entertainment, this is about to get wild.
2 notes · View notes
merlina87 · 2 years
Text
Sign this if you live in the EU please ♡
5 notes · View notes
cozylittleartblog · 7 months
Text
can't tell you how bad it feels to constantly tell other artists to come to tumblr, because it's the last good website that isn't fucked up by spoonfeeding algorithms and AI bullshit and isn't based around meaningless likes
just to watch that all fall apart in the last year or so and especially the last two weeks
there's nowhere good to go anymore for artists.
edit - a lot of people are saying the tags are important so actually, you'll look at my tags.
#please dont delete your accounts because of the AI crap. your art deserves more than being lost like that #if you have a good PC please glaze or nightshade it. if you dont or it doesnt work with your style (like mine) please start watermarking #use a plain-ish font. make it your username. if people can't google what your watermark says and find ur account its not a good watermark #it needs to be central in the image - NOT on the canvas edges - and put it in multiple places if you are compelled #please dont stop posting your art because of this shit. we just have to hope regulations will come slamming down on these shitheads #in the next year or two and you want to have accounts to come back to. the world Needs real art #if we all leave that just makes more room for these scam artists to fill in with their soulless recycled garbage #improvise adapt overcome. it sucks but it is what it is for the moment. safeguard yourself as best you can without making #years of art from thousands of artists lost media. the digital world and art is too temporary to hastily click a Delete button out of spite
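The watermarking advice in those tags (a plain font, your username, placed centrally and in more than one spot) can be sketched in code. This is a minimal illustration assuming the Pillow library; the demo image, filenames, and username are placeholders, not anything from the post:

```python
# Hypothetical sketch: stamp a semi-transparent username watermark at the
# center of an image plus two offsets, so a single crop can't remove it.
from PIL import Image, ImageDraw, ImageFont

def watermark(img: Image.Image, username: str) -> Image.Image:
    """Return a copy of `img` with `username` stamped in several spots."""
    base = img.convert("RGBA")
    layer = Image.new("RGBA", base.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(layer)
    font = ImageFont.load_default()  # swap in a plain-ish TTF of your choice
    w, h = base.size
    # central placement first (per the tags), then off-center repeats
    for x, y in [(w // 2, h // 2), (w // 4, h // 4), (3 * w // 4, 3 * h // 4)]:
        draw.text((x, y), username, font=font, fill=(255, 255, 255, 110))
    return Image.alpha_composite(base, layer).convert("RGB")

demo = Image.new("RGB", (400, 300), "slateblue")  # stand-in for real art
watermark(demo, "my-username").save("demo_watermarked.jpg")
```

The low alpha (110 of 255) keeps the art readable while still making the mark tedious to paint out; a real workflow would use a searchable username string, as the tags suggest.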
23K notes · View notes
minittwastaken · 2 months
Text
One of the things that really sucks about the whole generative AI situation is how much good this tech could do if it were used with consent. For example, AI voices could consensually be used for someone who lost their voice due to a medical condition.
0 notes
hughmanrights · 6 months
Text
instagram
1 note · View note
aquitainequeen · 1 year
Text
Tumblr media
Alarm bells being rung by Maureen Johnson on AI and the Big Publishers
27K notes · View notes
inner-space-oddity · 1 year
Text
I’m so mad my mom just pointed out to me that AI will probably be used to replace editor jobs which, by the way, is the field I am going into
I guess I’ll die?????
1 note · View note
lesbinewren · 4 months
Text
not surprising but still wild that google added an ai feature to the very top of every single search that gives incorrect information more often than not. like a beloved restaurant that randomly starts topping all of their food with cyanide one day for no reason. no you're not allowed to ask them to not top your dish with cyanide this is the future of food
167 notes · View notes
snuffysbox · 1 year
Text
anyway I think I've made my stance on AI art clear, but if anyone wants a complete picture of what I think I'm always gonna refer to this video by Steven Zapata
youtube
319 notes · View notes
sixstringphonic · 10 months
Text
OpenAI Fears Get Brushed Aside
(A follow-up to this story from May 16th 2023.) Big Tech dismissed board’s worries, along with the idea profit wouldn’t rule usage. (Reported by Brian Merchant, The Los Angeles Times, 11/21/23)

It’s not every day that the most talked-about company in the world sets itself on fire. Yet that seems to be what happened Friday, when OpenAI’s board announced that it had terminated its chief executive, Sam Altman, because he had not been “consistently candid in his communications with the board.” In corporate-speak, those are fighting words about as barbed as they come: They insinuated that Altman had been lying.

The sacking set in motion a dizzying sequence of events that kept the tech industry glued to its social feeds all weekend: First, it wiped $48 billion off the valuation of Microsoft, OpenAI’s biggest partner. Speculation about malfeasance swirled, but employees, Silicon Valley stalwarts and investors rallied around Altman, and the next day talks were being held to bring him back. Instead of some fiery scandal, reporting indicated that this was at core a dispute over whether Altman was building and selling AI responsibly. By Monday, talks had failed, a majority of OpenAI employees were threatening to resign, and Altman announced he was joining Microsoft.

All the while, something else went up in flames: the fiction that anything other than the profit motive is going to govern how AI gets developed and deployed. Concerns about “AI safety” are going to be steamrolled by the tech giants itching to tap into a new revenue stream every time.
It’s hard to overstate how wild this whole saga is. In a year when artificial intelligence has towered over the business world, OpenAI, with its ubiquitous ChatGPT and Dall-E products, has been the center of the universe. And Altman was its world-beating spokesman. In fact, he’s been the most prominent spokesperson for AI, period. For a highflying company’s own board to dump a CEO of such stature on a random Friday, with no warning or previous sign that anything serious was amiss — Altman had just taken center stage to announce the launch of OpenAI’s app store in a much-watched conference — is almost unheard of. (Many have compared the events to Apple’s famous 1985 canning of Steve Jobs, but even that was after the Lisa and the Macintosh failed to live up to sales expectations, not, like, during the peak success of the Apple II.)
So what on earth is going on?
Well, the first thing that’s important to know is that OpenAI’s board is, by design, differently constituted than that of most corporations — it’s a nonprofit organization structured to safeguard the development of AI as opposed to maximizing profitability. Most boards are tasked with ensuring their CEOs are best serving the financial interests of the company; OpenAI’s board is tasked with ensuring their CEO is not being reckless with the development of artificial intelligence and is acting in the best interests of “humanity.” This nonprofit board controls the for-profit company OpenAI.
Got it?
As Jeremy Khan put it at Fortune, “OpenAI’s structure was designed to enable OpenAI to raise the tens or even hundreds of billions of dollars it would need to succeed in its mission of building artificial general intelligence (AGI) … while at the same time preventing capitalist forces, and in particular a single tech giant, from controlling AGI.” And yet, Khan notes, as soon as Altman inked a $1-billion deal with Microsoft in 2019, “the structure was basically a time bomb.” The ticking got louder when Microsoft sunk $10 billion more into OpenAI in January of this year.
We still don’t know what exactly the board meant by saying Altman wasn’t “consistently candid in his communications.” But the reporting has focused on the growing schism between the science arm of the company, led by co-founder, chief scientist and board member Ilya Sutskever, and the commercial arm, led by Altman. We do know that Altman has been in expansion mode lately, seeking billions in new investment from Middle Eastern sovereign wealth funds to start a chip company to rival AI chipmaker Nvidia, and a billion more from Softbank for a venture with former Apple design chief Jony Ive to develop AI-focused hardware. And that’s on top of launching the aforementioned OpenAI app store to third party developers, which would allow anyone to build custom AIs and sell them on the company’s marketplace.
The working narrative now seems to be that Altman’s expansionist mind-set and his drive to commercialize AI — and perhaps there’s more we don’t know yet on this score — clashed with the Sutskever faction, who had become concerned that the company they co-founded was moving too fast. At least two of the board’s members are aligned with the so-called effective altruism movement, which sees AI as a potentially catastrophic force that could destroy humanity.
The board decided that Altman’s behavior violated the board’s mandate. But they also (somehow, wildly) seem to have failed to anticipate how much blowback they would get for firing Altman. And that blowback has come at gale-force strength; OpenAI employees and Silicon Valley power players such as Airbnb’s Brian Chesky and Eric Schmidt spent the weekend “I am Spartacus”-ing Altman. It’s not hard to see why. OpenAI had been in talks to sell shares to investors at an $86-billion valuation. Microsoft, which has invested more than $11 billion in OpenAI and now uses OpenAI’s tech on its platforms, was apparently informed of the board’s decision to fire Altman five minutes before the wider world. Its leadership was furious and seemingly led the effort to have Altman reinstated. But beyond all that lurked the question of whether there should really be any safeguards to the AI development model favored by Silicon Valley’s prime movers; whether a board should be able to remove a founder they believe is not acting in the interest of humanity — which, again, is their stated mission — or whether it should seek relentless expansion and scale.
See, even though the OpenAI board has quickly become the de facto villain in this story, as the venture capital analyst Eric Newcomer pointed out, we should maybe take its decision seriously. Firing Altman was probably not a call they made lightly, and just because they’re scrambling now because it turns out that call was an existential financial threat to the company does not mean their concerns were baseless. Far from it.
In fact, however this plays out, it has already succeeded in underlining how aggressively Altman has been pursuing business interests. For most tech titans, this would be a “well, duh” situation, but Altman has fastidiously cultivated an aura of a burdened guru warning the world of great disruptive changes. Recall those sheepdog eyes in the congressional hearings a few months back where he begged for the industry to be regulated, lest it become too powerful? Altman’s whole shtick is that he’s a weary messenger seeking to prepare the ground for responsible uses of AI that benefit humanity — yet he’s circling the globe lining up investors wherever he can, doing all he seemingly can to capitalize on this moment of intense AI interest.
To those who’ve been watching closely, this has always been something of an act — weeks after those hearings, after all, Altman fought real-world regulations that the European Union was seeking to impose on AI deployment. And we forget that OpenAI was originally founded as a nonprofit that claimed to be bent on operating with the utmost transparency — before Altman steered it into a for-profit company that keeps its models secret. Now, I don’t believe for a second that AI is on the cusp of becoming powerful enough to destroy mankind — I think that’s some in Silicon Valley (including OpenAI’s new interim CEO, Emmett Shear) getting carried away with a science fictional sense of self-importance, and a uniquely canny marketing tactic — but I do think there is a litany of harms and dangers that can be caused by AI in the shorter term. And AI safety concerns getting so thoroughly rolled at the snap of the Valley’s fingers is not something to cheer.
You’d like to believe that executives at AI-building companies who think there’s significant risk of global catastrophe here couldn’t be sidelined simply because Microsoft lost some stock value. But that’s where we are.
Sam Altman is first and foremost a pitchman for the year’s biggest tech products. No one’s quite sure how useful or interesting most of those products will be in the long run, and they’re not making a lot of money at the moment — so most of the value is bound up in the pitchman himself. Investors, OpenAI employees and partners such as Microsoft need Altman traveling the world telling everyone how AI is going to eclipse human intelligence any day now much more than they need, say, a high-functioning chatbot.
Which is why, more than anything, this winds up being a coup for Microsoft. Now it has got Altman in-house, where he can cheerlead for AI and make deals to his heart’s content. They still have OpenAI’s tech licensed, and OpenAI will need Microsoft more than ever. Now, it may yet turn out to be that this was nothing but a power struggle among board members, and it was a coup that went wrong. But if it turns out that the board had real worries and articulated them to Altman to no avail, no matter how you feel about the AI safety issue, we should be concerned about this outcome: a further consolidation of power of one of the biggest tech companies and less accountability for the product than ever.
If anyone still believes a company can steward the development of a product like AI without taking marching orders from Big Tech, I hope they’re disabused of this fiction by the Altman debacle. The reality is, no matter what other input may be offered to the company behind ChatGPT, the output will be the same: Money talks.
2 notes · View notes
millionmovieproject · 6 months
Text
Tumblr media
10 notes · View notes
amalasdraws · 2 years
Photo
Tumblr media
AI-generated images need to be controlled and regulated!
The music industry has shown that this is possible and that music, musicians and copyright can be protected! We need this for visual art too!  The datasets scrape all the art they can find on the internet! Artists didn’t give their permission and there is no way to opt out! Even medical reports and photos that are classified are being fed into those AI machines! It’s unethical and criminal, and they weasel their way around legal terms!
Stand with artists! Support them! Regulate AI!
Edit: no I don’t want the art world to be like the music industry! This was just an example to show that with support and lobbying, some kind of moderation is possible! But no one fucking cares for artists!
1K notes · View notes
Text
PSA TO AUTHORS AND READERS: AI generated “books”
Link
43 notes · View notes