# Google Search Engine Data Scraping

Discover how search engine scraping, specifically Google search results data scraping, can provide valuable insights for SEO, market research, and competitive analysis. Learn the techniques and tools to extract real-time data from Google efficiently while navigating legal and ethical considerations to boost your digital strategy.
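As a rough illustration of the mechanics, here is a minimal Python sketch of fetching and parsing a Google results page. This is a sketch under stated assumptions: the selectors are guesses (Google changes its markup frequently and serves JavaScript-heavy pages), and automated querying can violate Google's terms of service, which is why production pipelines typically rely on official APIs or dedicated scraping services.

```python
# Minimal SERP-fetching sketch. Selectors are illustrative assumptions,
# not a stable contract -- Google's markup changes often and automated
# access may be blocked or disallowed.
import requests
from bs4 import BeautifulSoup

def fetch_results(query: str) -> list[dict]:
    resp = requests.get(
        "https://www.google.com/search",
        params={"q": query, "num": 10},
        headers={"User-Agent": "Mozilla/5.0"},  # bare clients are usually refused
        timeout=10,
    )
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    results = []
    # Assume each organic result is a link whose visible title is an <h3>.
    for h3 in soup.select("a h3"):
        link = h3.find_parent("a")
        if link and link.get("href"):
            results.append({"title": h3.get_text(strip=True), "url": link["href"]})
    return results

if __name__ == "__main__":
    for r in fetch_results("web scraping ethics"):
        print(r["title"], "->", r["url"])
```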
#Search Engine Scraping #Scrape Google Search Results #Search Engine Scraping Services #Google Search Results data #Google Search Engine Data Scraping
0 notes
Google is now the only search engine that can surface results from Reddit, making one of the web's most valuable repositories of user-generated content exclusive to the internet's already dominant search engine.

If you use Bing, DuckDuckGo, Mojeek, Qwant, or any other alternative search engine that doesn't rely on Google's indexing, and you search Reddit using "site:reddit.com," you will not see any results from the last week. DuckDuckGo is currently turning up seven links when searching Reddit, but provides no data on where the links go or why, instead only saying that "We would like to show you a description here but the site won't allow us." Older results will still show up, but these search engines are no longer able to "crawl" Reddit, meaning that Google is the only search engine that will turn up results from Reddit going forward. Searching for Reddit still works on Kagi, an independent, paid search engine that buys part of its search index from Google.

The news shows how Google's near monopoly on search is now actively hindering other companies' ability to compete at a time when Google is facing increasing criticism over the quality of its search results. And while neither Reddit nor Google responded to a request for comment, it appears that the exclusion of other search engines is the result of a multi-million-dollar deal that gives Google the right to scrape Reddit for data to train its AI products.
July 24, 2024
2K notes
Google is now the only search engine that can surface results from Reddit, making one of the web's most valuable repositories of user-generated content exclusive to the internet's already dominant search engine. "...while neither Reddit nor Google responded to a request for comment, it appears that the exclusion of other search engines is the result of a multi-million dollar deal that gives Google the right to scrape Reddit for data to train its AI products."
829 notes
Only Google, or engines that use search data from Google, can index Reddit, seemingly after Google paid for the ability to index it for AI. This effectively sets a precedent that should be grounds for serious antitrust action.
61 notes
Don't know if you are aware, but Tumblr just partnered with Midjourney and OpenAI. You can opt out of having your data scraped via a toggle under 'Visibility'. Tumblr is being Tumblr, so if you cannot see the toggle, go to the Staff post about the changes and use the link they provide. It will show up then. Best of luck.
Hey! I toggled that myself as soon as I saw it was a thing, but I ought to answer this ask so people can see it as well if they haven't already. On one hand, if you post your art on the internet at all it likely has already been scraped, but on the other hand I find it meaningful to have some sort of agreed upon protection as well. The more we normalize models needing permission to scrape data, the closer we get to actual proper regulation, if not suppression, so tumblr doing this is... a very small step in the right direction.
To be honest, I kind of wish this conversation had been had all the way back when Google Images first became a thing, because while you can opt out of being found on search engines, I feel like there was never enough pressure put on making Google Images opt-in instead of the other way around. Obviously it's more profitable for both Google and now AI companies to just go ahead and Take without any permission, but I guess it took a monster like generative AI to get people to start acting against this.
126 notes
Talking about art references and some sources.

In this age of AI art and the corporate art industry, I think it is more important than ever to cite your inspirations and references, both works and creators, than to call AI art soulless or not art.

Before AI art, Pinterest was already one of the great "art/reference middlemen," hoarding references and disconnecting artists from the useful sources they could learn from, instead cherry-picking information that might be wrong or lacking context.

Before chatbots scraped data and spewed out recipes telling people to add bleach to their egg mix, we had websites chock-full of stolen recipes, padded with pointless made-up life stories to cram in as many keywords as possible for search engine optimization, written by unpaid interns.

What comes to mind is the "wolf skull" that is actually a badger skull, widely used as a tattoo reference.

Or the art student who studied horse muscles and mistakenly gave the horse a human muscle somewhere, also widely used as a reference.

And worst of all, Tumblr, which feels like the last big website that lets me curate my own user experience, has a notoriously awful search function (still not as bad as Twitter's). I couldn't find the source for either incident, even though I am mostly sure I reblogged both.
Also, besides the inaccuracy, I want people to think of it less as "I don't want to use AI art (or use it as a reference) because it is worse," and more as:

"We are losing our respect for, and connection to, the people who research and publish information, because of all of these middlemen (Google, Pinterest, and now AI tools) who love to obfuscate the sources of the information they took from someone's hard work."

"We are losing out on chances to connect to each other and build community based on shared goals."

"We are losing out on developing respect for knowledge, critical thinking skills, and curiosity, because we are under the false premise that all knowledge is easily available and easily created."

"We are losing our chance to decide to be someone who provides information and teaches, instead of consuming and learning all the time. Unlike what internet search engines and chatbots want to convince you, knowledge is hard-earned and not always available."
These are some art sources I used:

Eh, this is a coral identification guide I used, because why not.

This one is from Australia; I did not use it, but I truly appreciate how thorough it is.

It is made into a very good, familiar website format:
I beg everyone to make The Internet a good place to share information and argue with each other in good faith again.
4 notes
"we'll all have flying cars in the future" bro we cannot even do a web search anymore
here's a chunk of it since it's behind a subscriber wall
"If you use Bing, DuckDuckGo, Mojeek, Qwant or any other alternative search engine that doesn’t rely on Google’s indexing and search Reddit by using “site:reddit.com,” you will not see any results from the last week. DuckDuckGo is currently turning up seven links when searching Reddit, but provides no data on where the links go or why, instead only saying that “We would like to show you a description here but the site won't allow us.” Older results will still show up, but these search engines are no longer able to “crawl” Reddit, meaning that Google is the only search engine that will turn up results from Reddit going forward. Searching for Reddit still works on Kagi, an independent, paid search engine that buys part of its search index from Google.
The news shows how Google’s near monopoly on search is now actively hindering other companies’ ability to compete at a time when Google is facing increasing criticism over the quality of its search results. This exclusion of other search engines also comes after Reddit locked down access to its site to stop companies from scraping it for AI training data, which at the moment only Google can do as a result of a multi-million dollar deal that gives Google the right to scrape Reddit for data to train its AI products.
“They’re [Reddit] killing everything for search but Google,” Colin Hayhurst, CEO of the search engine Mojeek told me on a call.
Hayhurst tried contacting Reddit via email when Mojeek noticed it was blocked from crawling the site in early June, but said he has not heard back."
#unclear if google can get in trouble for this under monopoly law #since it is reddit charging #so technically other engines could buy in #if they can afford it for 60mil lol #it still gives them monopoly power though so who knows #mp #tech stuff #i will say that free subscribing to 404 isn't bad #i turned off all email stuff and they haven't bugged me #and the articles are interesting #so it's fine #i hate that i have to though
13 notes
i woke up feeling Nihilistic about Technology so now you must all suffer with me. most people are probably not keeping up with what the tech companies are actually making, doing, and demoing with AI the way i am, and that's okay. you will not like what you hear, most likely. i am also not any kind of technology professional; i just like technology. i just read about technology. there are sort of two things happening in tandem:
there is a race between some of the biggest ones (google, meta, openai, microsoft, etc., along with some not-yet-household-name ones like perplexity and deepseek) to essentially Decide, make the tech, and Win at this technology. think of how Google has been the de facto ruler of the internet between the Search Engine that delivers web pages and the Ad Engine that makes money for advertisers and google. they have all of the information and make the majority of the money. AI is the first technology in 20 years that has everyone scrambling to become the new Google of That.
ChatGPT, the thing we have access to right now, is stupid sometimes. but the reason every single company is pushing this shit is because they want to be First to make a product that Works, and they also are rebuilding how we will interact with the internet from the ground up. the thing basically everyone wants is to control 'the window', as it were, between You typing things into the computer and the larger internet. in a real way, Google owns 'the window' in many meaningful (monetary) ways. the future that basically every company is working towards right now is a version of the web where the websites on the internet become more of a database; a collection of data that can be accessed by the AI model. every computer you use becomes the Search box on Google.com, but when you type things into it, it just finds information and spits it out in front of you. there is a future where 'the internet' is just an AI chat bot.
holding those two ideas at once (everyone wants to be the Google of AI, and also every single tech company wants us to look at the internet in a way they choose and have control over) THIS SUCKS. THIS SUCKS ASS.
THE THING THAT IS BEAUTIFUL ABOUT THE INTERNET IS THAT IT IS OPEN. you can, in almost every place in the world, build a stupid website and connect it to the internet and anyone can look at it. ANYONE. we have absolutely NOTHING ELSE as universal, as open, as this. every single tech company is trying to change this in a meaningful way. in the Worst version of this, the internet just looks like the ChatGPT page, because it scrapes data and regurgitates it back to you. instead of seeing the place where this data was written, formatted, presented, on its own website like god intended
the worst part is: despite the posts you see from almost everyone in our respective bubbles about how AI sucks, we won't use it, it's bad for the environment, etc., NORMAL PEOPLE are using this shit all of the time. they are fine that it is occasionally wrong. and the models of the various Chatbot AIs are getting better every day at not being wrong. for the first time in the 20 years since google launched, there is a real threat that the place people go to search for things online is rapidly shifting somewhere else. because people are using this stuff. the loudest people against AI are currently a minority of loud voices. not only is this not going away, it is happening. this is actually web 3.0. and it's going to be so shit
this is not to say you will not be able to go to tumblr.com. but it will take effort. browser applications are basically not profitable, just ask Mozilla. google has chrome, which makes money because it has you use Google and it tracks your data to sell you ads. safari doesn't make money, but apple Takes google's money to pay for maintaining it. most other browsers are just forked chromium.
in my opinion there will be one sad browser application for you to access real websites, and it will eventually become unmaintained as people just go to the winner's AI chatbot app to access information online. 'websites' will become a subculture; a group of hobbyists will maintain the things that might let you access them. normal people will move on from the idea of going to websites.
the future of the internet will be a sad, lonely place, where the sterile, commercially viable and advertiser-friendly chatbot will tell you about whatever you type or say into the computer. it will encourage people to not make connections online, or even in their lives, because there will be a voice assistant they can talk with. in one of the latest google demos, there is a person fixing their bicycle, having Gemini look through the manual and tell them how to fix a certain part of the bike. Gemini calls a repair shop and talks to the person on the other side. a lot of people covering this are like 'that future is extremely cool and interesting to me' and when i heard That, that is when i knew we have, like. lost it.
for whatever reason, people want this kind of technology. and it makes me so sad.
4 notes
In 2017, soon after Google researchers invented a new kind of neural network called a transformer, a young OpenAI engineer named Alec Radford began experimenting with it. What made the transformer architecture different from that of existing A.I. systems was that it could ingest and make connections among larger volumes of text, and Radford decided to train his model on a database of seven thousand unpublished English-language books—romance, adventure, speculative tales, the full range of human fantasy and invention. Then, instead of asking the network to translate text, as Google’s researchers had done, he prompted it to predict the most probable next word in a sentence.
The machine responded: one word, then another, and another—each new term inferred from the patterns buried in those seven thousand books. Radford hadn’t given it rules of grammar or a copy of Strunk and White. He had simply fed it stories. And, from them, the machine appeared to learn how to write on its own. It felt like a magic trick: Radford flipped the switch, and something came from nothing.
His experiments laid the groundwork for ChatGPT, released in 2022. Even now, long after that first jolt, text generation can still provoke a sense of uncanniness. Ask ChatGPT to tell a joke or write a screenplay, and what it returns—rarely good, but reliably recognizable—is a sort of statistical curve fit to the vast corpus it was trained on, every sentence containing traces of the human experience encoded in that data.
When I’m drafting an e-mail and type, “Hey, thanks so much for,” then pause, and the program suggests “taking,” then “the,” then “time,” I’ve become newly aware of which of my thoughts diverge from the pattern and which conform to it. My messages are now shadowed by the general imagination of others. Many of whom, it seems, want to thank someone for taking . . . the . . . time.
That Radford’s breakthrough happened at OpenAI was no accident. The organization had been founded, in 2015, as a nonprofit “Manhattan Project for A.I.,” with early funding from Elon Musk and leadership from Sam Altman, who soon became its public face. Through a partnership with Microsoft, Altman secured access to powerful computing infrastructures. But, by 2017, the lab was still searching for a signature achievement. On another track, OpenAI researchers were teaching a T-shaped virtual robot to backflip: the bot would attempt random movements, and human observers would vote on which resembled a flip. With each round of feedback, it improved—minimally, but measurably. The company also had a distinctive ethos. Its leaders spoke about the existential threat of artificial general intelligence—the moment, vaguely defined, when machines would surpass human intelligence—while pursuing it relentlessly. The idea seemed to be that A.I. was potentially so threatening that it was essential to build a good A.I. faster than anyone else could build a bad one.
Even Microsoft’s resources weren’t limitless; chips and processing power devoted to one project couldn’t be used for another. In the aftermath of Radford’s breakthrough, OpenAI’s leadership—especially the genial Altman and his co-founder and chief scientist, the faintly shamanistic Ilya Sutskever—made a series of pivotal decisions. They would concentrate on language models rather than, say, back-flipping robots. Since existing neural networks already seemed capable of extracting patterns from data, the team chose not to focus on network design but instead to amass as much training data as possible. They moved beyond Radford’s cache of unpublished books and into a morass of YouTube transcripts and message-board chatter—language scraped from the internet in a generalized trawl.
That approach to deep learning required more computing power, which meant more money, putting strain on the original nonprofit model. But it worked. GPT-2 was released in 2019, an epochal event in the A.I. world, followed by the more consumer-oriented ChatGPT in 2022, which made a similar impression on the general public. User numbers surged, as did a sense of mystical momentum. At an off-site retreat near Yosemite, Sutskever reportedly set fire to an effigy representing unaligned artificial intelligence; at another retreat, he led colleagues in a chant: “Feel the AGI. Feel the AGI.”
In the prickly “Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI” (Penguin Press), Karen Hao tracks the fallout from the GPT breakthroughs across OpenAI’s rivals—Google, Meta, Anthropic, Baidu—and argues that each company, in its own way, mirrored Altman’s choices. The OpenAI model of scale at all costs became the industry’s default. Hao’s book is at once admirably detailed and one long pointed finger. “It was specifically OpenAI, with its billionaire origins, unique ideological bent, and Altman’s singular drive, network, and fundraising talent, that created a ripe combination for its particular vision to emerge and take over,” she writes. “Everything OpenAI did was the opposite of inevitable; the explosive global costs of its massive deep learning models, and the perilous race it sparked across the industry to scale such models to planetary limits, could only have ever arisen from the one place it actually did.” We have been, in other words, seduced—lulled by the spooky, high-minded rhetoric of existential risk. The story of A.I.’s evolution over the past decade, in Hao’s telling, is not really about the date of machine takeover or the degree of human control over the technology—the terms of the A.G.I. debate. Instead, it’s a corporate story about how we ended up with the version of A.I. we’ve got.
The “original sin” of this arm of technology, Hao writes, lay in a decision by a Dartmouth mathematician named John McCarthy, in 1955, to coin the phrase “artificial intelligence” in the first place. “The term lends itself to casual anthropomorphizing and breathless exaggerations about the technology’s capabilities,” she observes. As evidence, she points to Frank Rosenblatt, a Cornell professor who, in the late fifties, devised a system that could distinguish between cards with a small square on the right versus the left. Rosenblatt promoted it as brain-like—on its way to sentience and self-replication—and these claims were picked up and broadcast by the New York Times. But a broader cultural hesitancy about the technology’s implications meant that, once OpenAI made its breakthrough, Altman—its C.E.O.—came to be seen not only as a fiduciary steward but also as an ethical one. The background question that began to bubble up around the Valley, Keach Hagey writes in “The Optimist: Sam Altman, OpenAI, and the Race to Invent the Future” (Norton), “first whispered, then murmured, then popping up in elaborate online essays from the company’s defectors: Can we trust this person to lead us to AGI?”
Within the world of tech founders, Altman might have seemed a pretty trustworthy candidate. He emerged from his twenties not just very influential and very rich (which isn’t unusual in Silicon Valley) but with his moral reputation basically intact (which is). Reared in a St. Louis suburb in a Reform Jewish household, the eldest of four children of a real-estate developer and a dermatologist, he had been identified early on as a kind of polymathic whiz kid at John Burroughs, a local prep school. “His personality kind of reminded me of Malcolm Gladwell,” the school’s head, Andy Abbott, tells Hagey. “He can talk about anything and it’s really interesting”—computers, politics, Faulkner, human rights.
Altman came out as gay at sixteen. At Stanford, according to Hagey, whose biography is more conventional than Hao’s but is quite compelling, he launched a student campaign in support of gay marriage and briefly entertained the possibility of taking it national. At an entrepreneur fair during his sophomore year, in 2005, the physically slight Altman stood on a table, flipped open his phone, declared that geolocation was the future, and invited anyone interested to join him. Soon, he dropped out and was running a company called Loopt. Abbott remembered the moment he heard that his former student was going into tech. “Oh, don’t go in that direction, Sam,” he said. “You’re so personable!”
Personability plays in Silicon Valley, too. Loopt was a modest success, but Altman made an impression. “He probably weighed a hundred and ten pounds soaking wet, and he’s surrounded by all these middle-aged adults that are just taking in his gospel,” an executive who encountered him at the time tells Hagey. “Anyone who came across him at the time wished they had some of what he had.”
By his late twenties, Altman had parlayed his Loopt millions into a series of successful startup investments and become the president of Y Combinator, the tech mega-incubator that has spun off dozens of billion-dollar companies. The role made him a first point of contact for Valley elders curious about what was coming next. From Jeff Bezos, he borrowed the habit of introducing two people by e-mail with a single question mark; from Paul Graham, Y Combinator’s co-founder, he absorbed the idea that startups should “add a zero”—always think bigger. It was as if he were running an internal algorithm trained on the corpus of Silicon Valley-founder lore, predicting the next most likely move.
To the elders he studied, Altman was something like the tech world’s radiant child, both its promise and its mascot. Peter Thiel once remarked that Altman was “just at the absolute epicenter, maybe not of Silicon Valley, but of the Silicon Valley zeitgeist.” (Altman is now married to a young Australian techie he met in Thiel’s hot tub.) Graham offered his own version: “You could parachute him into an island full of cannibals and come back in five years and he’d be king.” Some kind of generational arbitrage seemed to be under way. In 2008, Altman began attending Sun Valley Conference, an exclusive annual retreat for industry leaders, where he eventually became “close friends,” we learn, with Barry Diller and Diane von Furstenberg. Yet, in the mid-twenty-tens, he still shared an apartment with his two brothers. Hao records a later incident in which he offered ketamine to an employee he’d just fired. He was both the iconic child to the tech world’s adults and the iconic adult to its children.
An interesting artifact of the past decade in American life is that the apocalyptic sensibility that came to grip U.S. politics during the 2016 Presidential campaign—the conviction, on both right and left, that the existing structure simply could not hold—had already bubbled up in Silicon Valley a few years earlier. By 2015, Altman had been donating to Democratic candidates and seemed to have seriously considered a run for governor of California. But he also told Tad Friend, in a New Yorker Profile, that he was preparing for civilizational collapse and had stockpiled “guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to.”
One view is that tech billionaires saw the brink early because they understood just how unequal—and therefore unstable—American society was becoming. But, inside the Valley, that anxiety often expressed itself in the language of existential risk. In particular, fears about runaway artificial intelligence surged around the time of the 2014 publication of “Superintelligence,” by the philosopher Nick Bostrom. According to Hao, Elon Musk became fixated on an A.I. technologist, Demis Hassabis—a co-founder of DeepMind, which had recently been acquired by Google—whom Musk reportedly viewed as a “supervillain.” That same year, at an M.I.T. symposium, Musk warned that experiments in artificial intelligence risked “summoning the demon.”
Altman had been itching for a bigger project. The next Memorial Day weekend, he gathered hundreds of young Y Combinator protégés for an annual glamping retreat among the redwoods of Mendocino County. The night before, he had beaten a group of Y Combinator staffers at Settlers of Catan. Now, standing before them, he announced that his interests had narrowed—from, roughly, all of technology to three subjects that he believed could fundamentally change humanity: nuclear energy, pandemics, and, most profound of all, machine superintelligence.
That same month, Altman sent an e-mail to Musk. “Been thinking a lot about whether it’s possible to stop humanity from developing AI,” he wrote. “I think the answer is almost definitely not. If it’s going to happen anyway, it seems like it would be good for someone other than Google to do it first.” Altman proposed his Manhattan Project for A.I. so that the technology, as he put it, would “belong to the world,” through some form of nonprofit. Musk replied, “probably worth a conversation.”
It fell to Chuck Schumer, of all people, to offer the secular-liberal benediction for the project—by then consolidated as OpenAI and led by Altman, who had sidelined Musk. “You’re doing important work,” the New York senator told the company’s employees, seated near a TV projecting a fire, during an off-the-record visit to OpenAI’s headquarters in 2019, as Hao documents. “We don’t fully understand it, but it’s important.” Schumer went on, “And I know Sam. You’re in good hands.”
How do people working in A.I. view the technology? The standard account, one that Hao follows, divides them into two camps: the boomers, who are optimistic about AI’s potential benefits for humanity and want to accelerate its development, and the doomers, who emphasize existential risk and edge toward paranoia. OpenAI, in its original conception, was partially a doomer project. Musk’s particular fear about Demis Hassabis was that, if Google assigned a potential A.G.I. the goal of maximizing profits, it might try to take out its competitors at any cost. OpenAI was meant to explore this technological frontier in order to keep it out of malign hands.
But in early 2018 Musk left. The organization was struggling to raise funds—he had pledged to raise a billion dollars but ultimately contributed less than forty-five million—and a faction within OpenAI was pushing to convert it to a for-profit entity, both to attract capital and to lure top researchers with equity. At the meeting where Musk announced his departure, he gave contradictory explanations: he said that OpenAI wouldn't be able to build an A.G.I. as a nonprofit and that Tesla had more resources to pursue the goal, suggesting that the best place to pursue A.G.I. was elsewhere. An intern pointed out that Musk had insisted that the for-profit dynamic would undermine safety in developing A.G.I. "Isn't this going back to what you said you didn't want to do?" he asked. "You can't imagine how much time I've spent thinking about this," Musk replied. "I'm truly scared about this issue." He also called the intern a jackass.
As OpenAI evolved into a nonprofit with a for-profit subsidiary, it came to house both perspectives: a doomer group focussed on safety and research, whose principal advocate was the Italian American scientist Dario Amodei; and a boomer culture focussed on products and applications, often led by Greg Brockman, an M.I.T. dropout and software engineer who pushed the organization toward embracing commercialization. But these lines crossed. Amodei ultimately left the company, alongside his sister, Daniela, insisting that OpenAI had abandoned its founding ethos, though, in Hao’s view, the company they founded, Anthropic, would “in time show little divergence” from OpenAI’s model: the same fixation on scale, the same culture of secrecy. From the other direction came Ilya Sutskever, who had made a major breakthrough in A.I. research as a graduate student in Toronto, and who would become perhaps OpenAI’s most influential theorist. He had once been an unabashed boomer. “I think that it’s fairly likely,” he told the A.I. journalist Cade Metz, “that it will not take too long of a time for the entire surface of the Earth to become covered with data centers and power stations.” By 2023, however, when he helped orchestrate a briefly successful corporate coup against Altman, he was firmly aligned with the doomers. The trajectories of Sutskever and the Amodeis suggest a more fluid category—the boomer-doomers.
Those who most believe in a cause and those who most fear it tend to share one essential assessment: they agree on its power. In this case, the prospect of a technology that could end a phase of civilization drew both camps—boomers and doomers—toward the same flame. Helen Toner, an A.I.-safety expert and academic who eventually joined OpenAI’s board, had spent time studying the fast-evolving A.I. scene in China, the United States’ chief rival in the global race. As Hagey recounts, “Among the things she found notable in China was how reluctant AI engineers were to discuss the social implications of what they were doing. In the Bay Area, meanwhile, they seemed to want to do nothing but.”
Yet OpenAI’s success hinged less on speculative philosophies than on more familiar systems: the flexibility of American capital, and Altman’s personal charm. In 2018, while attending the Sun Valley Conference, in Idaho, Altman ran into Microsoft’s C.E.O., Satya Nadella, in a stairwell and pitched him on a collaboration. Though Bill Gates was skeptical, most of Nadella’s team was enthusiastic. Within a year, Microsoft had announced an investment of a billion dollars in OpenAI—much of it in the form of credits on its cloud platform, Azure. That figure later rose beyond ten billion. Hao speaks with a Chinese A.I. researcher who puts it plainly: “In China, which rivals the U.S. in AI talent, no team of researchers and engineers, no matter how impressive, would get $1 billion, let alone ten times more, to develop a massively expensive technology without an articulated vision of exactly what it would look like and what it would be good for.”
Nadella appears only in passing in both of these books—he’s the adult in the room, and adults are famously not so interesting. But after Microsoft’s multibillion-dollar investments, his influence over OpenAI has come to appear at least as consequential as Altman’s. It was Nadella, after all, who intervened to end the brief 2023 coup, after which Altman was swiftly reinstalled as C.E.O. The year before, Sutskever remarked that “it may be that today’s neural networks are slightly conscious”—a comment to which a scientist at a rival A.I. company replied, “In the same sense that it may be that a large field of wheat is slightly pasta.” Nadella, by contrast, seems broadly allergic to boomer-doomer metaphysics.
The deeper dynamic of contemporary artificial intelligence may be that it reflects, rather than transcends, the corporate conditions of its creation—just as Altman mirrored the manners of his Silicon Valley elders, or as a chatbot’s replies reflect the texts it has been trained on. Appearing recently on Dwarkesh Patel’s influential tech podcast, Nadella, a smooth and upbeat presence, dismissed A.G.I. as a meaningless category. When Patel pressed him on whether A.I. agents would eventually take over not only manual labor but cognitive work, Nadella replied that this might be for the best: “Who said my life’s goal is to triage my e-mail, right? Let an A.I. agent triage my e-mail. But after having triaged my e-mail, give me a higher-level cognitive-labor task of, hey, these are the three drafts I really want you to review.” And if it took over that second thing? Nadella said, “There will be a third thing.”
Nadella seemed quite convinced that A.I. remains a normal technology, and his instinct was to try to narrow each question, so that he was debating project architecture rather than philosophy. When Patel wondered if Nadella would add an A.I. agent to Microsoft’s board, a fairly dystopian-sounding proposition, Nadella replied that Microsoft engineers were currently experimenting with an A.I. agent in Teams, to organize and redirect human team members, and said that he could see the use of having such an agent on Microsoft’s board. It did sound a bit less scary, and also maybe a bit less interesting.
Much like Altman, Nadella is now trying to shift the way the public thinks about A.I. by changing the way it’s talked about—less science fiction, more office productivity. It’s an uphill fight, and at least partly the industry’s own fault. The early, very public bouts of boomerism and doomerism helped attract investment and engineering talent, but they also seeded a broad, low-level unease. If Sutskever—who knew as much about the technology as anyone—could declare it “slightly conscious,” it becomes markedly harder for Nadella, three years later, to reassure the public that what we’re really talking about is just helpful new features in Microsoft Teams.
In other ways, too, Altman is contending with a shifting cultural tide. Sometime around 2016, the tone of tech coverage began to darken. The hagiographic mode gave way to a more prosecutorial one. David Kirkpatrick’s “The Facebook Effect” (2010) has its successor in Sarah Wynn-Williams’s “Careless People” (2025); Michael Lewis’s “The New New Thing” (1999) has been countered by Emily Chang’s “Brotopia” (2018); even Amazon’s great chronicler, Brad Stone, moved from “The Everything Store” (2013) to the more skeptical “Amazon Unbound” (2021).
Hao’s reporting inside OpenAI is exceptional, and she’s persuasive in her argument that the public should focus less on A.I.’s putative “sentience” and more on its implications for labor and the environment. Still, her case against Altman can feel both very personal and slightly overheated. Toward the end of “Empire of AI,” she writes that he has “a long history of dishonesty, power grabbing, and self-serving tactics.” (Welcome to the human race, Sam.) Hao tries hard, if not very successfully, to bolster an accusation made public in 2021 by his sister Annie Altman—that, beginning when she was three and Sam was twelve, he climbed into her bed and molested her, buried memories that she says she recovered during therapy in her twenties. (Altman denies the allegation.) This new, more critical vision of the tech founders risks echoing Musk’s vendetta against Hassabis—inflating contingent figures into supervillains, out of ambient anxiety.
Altman’s story is at once about a man changing artificial intelligence and about how A.I.’s evolving nature has, in turn, changed him—quieting, without resolving, the largest questions about work, power, and the future. Hao’s book opens in late 2023, with the brief ouster of Altman by Sutskever and several senior OpenAI executives, an episode now referred to internally as “the Blip.” When Altman learns of the attempted coup, he is in Las Vegas for a Formula 1 race. Sutskever calls him over Google Meet and tells him that he is being fired. Altman remains serene. He doesn’t appear to take the moment too seriously—perhaps because, in Sutskever’s zeal, he recognizes a version of his former self. Calmly, he replies, “How can I help?” He has become, in every sense, all business.
3 notes
fundamentally you need to understand that the internet-scraping text generative AI (like ChatGPT) is not the point of the AI tech boom. the only way people are making money off that is through making nonsense articles that have great search engine optimization. essentially they make a webpage that's worded perfectly to show up as the top result on google, which generates clicks, which generates ad revenue. text generative ai is basically a machine that creates a host page for ad space right now.
and yeah, that sucks. but I don't think the commercialized internet is ever going away, so here we are. tbh, I think finding information on the internet, in books, or through anything is a skill that requires critical thinking and cross-checking your sources. people printed bullshit in books before the internet was ever invented. misinformation is never going away. I don't think text generative AI is going to really change the landscape that much on misinformation because people are becoming educated about it. the text generative AI isn't a genius supercomputer, but rather a time-saving tool to get a head start on identifying key points of information for further research.
anyway. the point of the AI tech boom is leveraging big data to improve customer relationship management (CRM) and streamline manufacturing. businesses collect a ridiculous amount of data from your internet browsing and purchases, but much of that data is stored in different places with different access points. where you make money with AI isn't in the Wild West internet; it's in a structured environment where you know the data it's scraping is accurate. companies like nvidia are getting huge because, along with the new computer chips, they sell a customizable ecosystem to go with them.
so let’s say you spent 10 minutes browsing a clothing retailer’s website. you navigated directly to the clothing > pants tab and filtered for black pants only. you added one pair of pants to your cart, and then spent your last minute or two browsing some shirts. you check out with just the pants, spending $40. you select standard shipping.
with AI for CRM, that company can SIGNIFICANTLY more easily analyze information about that sale. maybe the website developers see the time you spent on the site, but only the warehouse knows your shipping preferences, and sales audit knows the amount you spent, but they can’t see what color pants you bought. whereas a person would have to connect a HUGE amount of data to compile EVERY customer’s preferences across all of these things, AI can do it easily.
this allows the company to make better broad decisions, like what clothing lines to renew, in which colors, and in what quantities. but it ALSO allows them to better customize their advertising directly to you. through your browsing, they can use AI to fill a pre-made template with products you specifically may be interested in, and email it directly to you. the money is in cutting waste through better manufacturing decisions, CRM on an individual and large advertising scale, and reducing the need for human labor to collect all this information manually.
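as a toy sketch of what that looks like in practice (made-up table and column names, not any real retailer's schema), merging those silos into one profile is trivial once the data sits in one place:

```python
# Toy cross-silo merge: web analytics, warehouse, and sales audit each
# hold one slice of a customer's behavior; joining on a shared customer id
# yields the unified profile described above. All names/values are made up.
import pandas as pd

web = pd.DataFrame({
    "customer_id": [101],
    "minutes_on_site": [10],
    "category_browsed": ["pants"],
    "color_filter": ["black"],
})
warehouse = pd.DataFrame({
    "customer_id": [101],
    "shipping_preference": ["standard"],
})
sales_audit = pd.DataFrame({
    "customer_id": [101],
    "order_total_usd": [40.00],
})

profile = (
    web.merge(warehouse, on="customer_id")
       .merge(sales_audit, on="customer_id")
)
print(profile.to_string(index=False))
```

(the hard part the AI tooling sells is doing this across millions of customers and messy, inconsistent systems, not the merge itself)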
(also, AI is great for developing new computer code. where a developer would have to trawl for hours on GitHub to find some sample code to mess with to try to solve a problem, the AI can spit out 10 possible solutions to play around with. that's big, but not the point right now.)
so I think it’s concerning how many people are sooo focused on ChatGPT as the face of AI when it’s the least profitable thing out there rn. there is money in the CRM and the manufacturing and reduced labor. corporations WILL develop the technology for those profits. frankly I think the bigger concern is how AI will affect big data in a government ecosystem. internet surveillance is real in the sense that everything you do on the internet is stored in little bits of information across a million different places. AI will significantly impact the government’s ability to scrape and compile information across the internet without having to slog through mountains of junk data.
#which isn’t meant to like. scare you or be doomerism or whatever#but every take I’ve seen about AI on here has just been very ignorant of the actual industry#like everything is abt AI vs artists and it’s like. that’s not why they’re developing this shit#that’s not where the money is. that’s a side effect.#ai#generative ai
9 notes
Oekaki updatez...
Monster Kidz Oekaki is still up and i'd like to keep it that way, but i need to give it some more attention and keep people updated on what's going on/what my plans are for it. so let me jot some thoughts down...
data scraping for machine learning: this has been a concern for a lot of artists as of late, so I've added a robots.txt file and an ai.txt file (as per the opt-out standard proposed by Spawning.ai) to the site in an effort to keep out as many web crawlers for AI as possible. the site will still be indexed by search engines and the Internet Archive. as an additional measure, later tonight I'll try adding "noai", "noimageai", and "noml" HTML meta tags to the site (this would probably be quick and easy to do but i'm soooo sleepy 🛌)
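for reference, here's roughly what that looks like. GPTBot (OpenAI), CCBot (Common Crawl), and Google-Extended (google's AI-training opt-out) are real crawler tokens, but the list of AI crawlers keeps growing, so treat this as a sketch rather than a complete blocklist:

```
# robots.txt -- block known AI-training crawlers, leave search indexing alone
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /

# everyone else (search engines, the Internet Archive) stays allowed
User-agent: *
Allow: /
```

and the meta tags would go in the site's <head>:

```html
<meta name="robots" content="noai, noimageai, noml">
```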
enabling uploads: right now, most users can only post art by drawing in one of the oekaki applets in the browser. i've already given this some thought for a while now, but it seems like artist-oriented spaces online have been dwindling lately, so i'd like to give upload privileges to anyone who's already made a drawing on the oekaki and make a google form for those who haven't (just to confirm who you are/that you won't use the feature maliciously). i would probably set some ground rules like "don't spam uploads"
rules: i'd like to make the rules a little less anal. like, ok, it's no skin off my ass if some kid draws freddy fazbear even though i hope scott cawthon's whole empire explodes. i should also add rules pertaining to uploads, which means i'm probably going to have to address AI generated content. on one hand i hate how, say, deviantart's front page is loaded with bland, tacky, "trending on artstation"-ass AI generated shit (among other issues i have with the medium) but on the other hand i have no interest in trying to interrogate someone about whether they're a Real Artist or scream at someone with the rage of 1,000 scorned concept artists for referencing an AI generated image someone else posted, or something. so i'm not sure how to tackle this tastefully
"Branding": i'm wondering if i should present this as less of a UTDR Oekaki and more of a General Purpose Oekaki with a monster theming. functionally, there wouldn't be much of a difference, but maybe the oekaki could have its own mascot
fun stuff: is having a poll sort of "obsolete" now because of tumblr polls, or should I keep it...? i'd also like to come up with ideas for Things To Do like weekly/monthly art prompts, or maybe games/events like a splatfest/artfight type thing. if you have any ideas of your own, let me know
boring stuff: i need to figure out how to set up automated backups, so i guess i'll do that sometime soon... i should also update the oekaki software sometime (this is scary because i've made a lot of custom edits to everything)
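(for the backups, something like this nightly cron job is probably where i'd start; the paths and database name are placeholders, not the site's actual setup:)

```sh
#!/bin/sh
# nightly-backup.sh -- tar the site files and dump the database,
# keeping two weeks of archives. run from cron, e.g.:
#   0 4 * * * /usr/local/bin/nightly-backup.sh
STAMP=$(date +%Y-%m-%d)
tar -czf "/backups/oekaki-files-$STAMP.tar.gz" /var/www/oekaki
mysqldump oekaki_db | gzip > "/backups/oekaki-db-$STAMP.sql.gz"
find /backups -name 'oekaki-*' -mtime +14 -delete
```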
Money: well this costs money to host so I might put a ko-fi link for donations somewhere... at some point... maybe.......
8 notes
interesting dominos falling in the web browser game
google lost their monopoly suit over being essentially the only search engine game in town and paying other browsers to have google as the default engine. they've appealed the suit, but it doesn't look good and seems to be mostly a stalling tactic
this has interesting ramifications for firefox. the vast majority of firefox's funding comes from google: payments for firefox to integrate google search into their browser.
google's penalty is up in the air, but it's pretty likely it will involve no longer being allowed to pay other browsers to integrate google by default. which means firefox's funding absolutely craters
this makes mozilla's recent corporate coup and restructuring into something that is partially an ad company, along with the new advertising telemetry and data-scraping "features" in firefox, make a lot more sense.
more dominos, though: firefox is essentially google's token to point at and say, "hey, look, not every major browser is based on our browser! see, mozilla has 3% market share!"
if firefox fails, google faces another potential monopoly suit - this time in the web browser market.
there's a lot up in the air, but firefox could end up being the sacrificial lamb for a better and more diverse web browsing landscape. all of these potential ramifications are years away, though. it'll be interesting to keep up with
4 notes
If you're changing your search engine as well, I prefer DuckDuckGo; it's the antithesis of all of Google's tracking and data scraping
i may use it, but this is entirely personal and in my head: the first person i knew who told me to use it was my photography professor in college, who was one of those professors that made everyone meditate before class, so that has kind of colored my view of the vibe of ddg
4 notes
i get why everybody hates it, but i just saw someone threaten to delete their account "if tumblr scrapes my data in the future" and, stranger, it's BEEN getting scraped! someone's scraping it right now. google analytics has been scraping and reselling tumblr data since it registered its fucking domain name. if your posts are findable by a search engine, a hundred different companies have taken a big rake to all your posts, dumped the data they scraped into a big burlap sack with a bunch of other blog shavings, and resold it for wayyy more than it's actually worth to every company that puts up online ads. this has been true about every app and personal website for decades. the only difference with the most recent update is that automattic has somehow tricked midjourney into throwing some of their bottomless startup money into Wordpress Guy's pockets.
2 notes
Learn about negative SEO tactics and how to protect your website from malicious actions
In today’s highly competitive online landscape, businesses and website owners face not only the challenge of optimizing their websites for search engines but also the threat of negative SEO tactics. Negative SEO refers to the practice of using unethical and malicious strategies to harm a competitor’s website’s search engine rankings and online reputation. This dark side of search engine optimization can lead to devastating consequences for innocent website owners.
In this article, we will explore various negative SEO tactics and provide valuable insights on how to safeguard your website from such attacks.
Link Spamming and Manipulation
One of the most common negative SEO tactics is the mass creation of low-quality, spammy backlinks pointing to a targeted website. These malicious backlinks can lead search engines to believe that the website is engaging in link schemes, resulting in penalties and ranking drops. Website owners must regularly monitor their backlink profiles to identify and disavow any toxic links.
Content Scraping and Duplication
Content scraping involves copying content from a target website and republishing it on multiple other sites without permission. This can lead to duplicate content issues, harming the original website’s search rankings. Regularly monitoring your content for plagiarism and submitting DMCA takedown requests can help address this problem.
Fake Negative Reviews
Negative SEO attackers may leave fake negative reviews on review sites and business directories to damage a website’s reputation. Monitoring and responding to reviews promptly can help mitigate the impact of such attacks.
Distributed Denial of Service (DDoS) Attacks
DDoS attacks overload a website’s server with an excessive amount of traffic, causing it to become slow or crash. Implementing DDoS protection services can help safeguard your website against such attacks.
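For smaller floods, basic rate limiting at the web server is a useful first layer even before a dedicated DDoS-protection service or CDN is in place. A minimal nginx sketch (the zone name and rates are illustrative, not recommendations):

```nginx
# Cap each client IP at ~10 requests/second, absorbing short bursts.
http {
    limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

    server {
        location / {
            limit_req zone=perip burst=20 nodelay;
        }
    }
}
```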
How to Protect Your Website

Regularly Monitor Backlinks
Use tools like Google Search Console and third-party SEO software to monitor your website’s backlink profile. Regularly review and disavow toxic links to prevent negative SEO attacks based on link spamming.
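Google's disavow file has a simple, documented format: plain text, one domain or URL per line, with # marking comments. The entries below are placeholders, not real domains:

```
# Disavow file uploaded via Search Console's Disavow Links tool
domain:spammy-link-farm.example
domain:cheap-backlinks.example
https://blog.example/single-toxic-page/
```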
Secure Your Website
Ensure your website is secure with HTTPS encryption and robust security measures. This will help protect your website from hacking attempts and potential negative SEO attacks like content manipulation.
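For example, on a typical nginx host, a free Let's Encrypt certificate can be provisioned with Certbot; the domains here are placeholders:

```sh
# Issue and install a certificate (Certbot also configures auto-renewal)
sudo certbot --nginx -d example.com -d www.example.com

# Confirm that unattended renewal works
sudo certbot renew --dry-run
```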
Frequently Check for Duplicate Content
Use plagiarism checker tools to identify if your content has been copied elsewhere. If you find duplicate content, reach out to the website owners to request removal or use the Google DMCA process.
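Where an automated check is wanted, a rough word-shingle comparison can flag heavy reuse between your page and a suspect page. This is a minimal sketch with placeholder URLs and crude HTML stripping, not a production plagiarism detector:

```python
# Estimate text reuse between two pages via word-shingle Jaccard overlap.
import re
import requests

def visible_text(html: str) -> str:
    return re.sub(r"<[^>]+>", " ", html)  # crude tag stripping for the sketch

def shingles(text: str, n: int = 8) -> set:
    words = re.findall(r"\w+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(a_text: str, b_text: str) -> float:
    a, b = shingles(a_text), shingles(b_text)
    return len(a & b) / len(a | b) if (a | b) else 0.0

original = visible_text(requests.get("https://your-site.example/article", timeout=10).text)
suspect = visible_text(requests.get("https://suspect-site.example/copy", timeout=10).text)
print(f"shingle overlap: {overlap(original, suspect):.1%}")
```

A high overlap score is a cue to gather evidence and file a DMCA request, not proof by itself.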
Implement Review Monitoring
Keep an eye on reviews and mentions of your brand across various platforms. Respond professionally to negative reviews and report fake reviews to the respective platforms for removal.
Optimize Website Performance
A fast-loading website can better withstand DDoS attacks. Optimize your website’s performance by compressing images, using caching, and leveraging Content Delivery Networks (CDNs).
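As a sketch of the server-side half of this advice, compression and long-lived caching for static assets in nginx might look like the following; the values are illustrative:

```nginx
# Compress text assets and let browsers/CDNs cache static files for 30 days.
gzip on;
gzip_types text/css application/javascript image/svg+xml;

location ~* \.(css|js|png|jpg|svg|woff2)$ {
    expires 30d;
    add_header Cache-Control "public, immutable";
}
```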
Regularly Backup Your Website
Frequent website backups will ensure that even if an attack occurs, you can quickly restore your website to its previous state without losing valuable data.
Use Search Console and Analytics

Stay vigilant by setting up alerts in Google Search Console (formerly Webmaster Tools) and Google Analytics. These alerts can notify you of sudden drops in website traffic or other suspicious activities.
Conclusion
As the digital landscape continues to evolve, negative SEO tactics remain a persistent threat. Understanding these malicious strategies and proactively taking steps to protect your website is crucial for every website owner.
Discover countermeasures against negative SEO tactics, safeguarding your site from harm. Shield your website with insights from an experienced SEO company in Chandigarh for robust defense strategies.
2 notes
This is why I'm always thinking of going back to a "dumb" feature phone, since they still sell newer models without the "smart" shite on Android or iOS. There's always the option of rooting your phone and flashing a custom ROM. What's needed is for something to come along for smartphones and do what Linux has done for desktop computers. There are Linux phones, but they're not always compatible with your SIM, the camera might not always work, etc. I wouldn't mind installing GrapheneOS, but fuck buying from Google to do it (the irony).
Also, most of the public will complain about this on modern phones and desktops, then blame capitalism, corporations, etc. Which is valid, but they'll also forget their own complicity in this. This situation where we have giga-corporations with God-like power stuffing spyware and AI slop into everything is what you get when hordes of normies aggressively supported those corporations throughout the 2010s. You can't have your cake and eat it too. You can't buy into corporate convenience culture, stubbornly refuse to change any services you use, even after enshittification, then complain about the consequences.
One of the most frustrating things is listening to normies whinging about the consequences of their decadence, stuck to their phones constantly, using apps to pay for absolutely everything etc then complain when Google et al have complete control over their digital life plus AI everywhere. Yet when you try and suggest an alternative they roll their eyes and treat you like a tin foil hat wearing fruitcake or just lose interest. Switching to Linux? Changing what browser or search engine you use? Paying in cash/ in person? Not relying on social media to form your opinions for you? Actually taking steps to protect your privacy and prevent your data being scraped by AI companies? "Oh no, too much bother. I'm going to whinge like fuck and still stubbornly refuse to stop using services from the likes of Google as much as possible".
God I hate how normalized not being in control of your own devices has become. My phone updates in the middle of the night without asking me shit or getting my consent for anything, and it's like "Oh hi I'm your new AI, please enjoy this forced overlay that you can't exit out of until you go through my tutorial"
"Great fuck you, I would like to uninstall you" "Oh I'm sorry you can't uninstall me! I'm a core system application and if you uninstall me your phone won't function correctly despite the fact that I did not exist yesterday and your phone worked fine" "....." "You can disable parts of my functionality but I will always be here and I will pop up notifications asking you to re-enable me unless you figure out how to disable those too! Then I will still show up in a different color at the top of your settings application telling you that you need to 'fix" a 'problem' with your phone, that problem being that I am disabled. Does that help?"
Like, you know what I can do on my desktop? "sudo pacman -Rdd linux" , this will just fucking remove the entire linux kernel. Fundamentally breaking my computer until I boot up a live disk and chroot in and reinstall it or whatever, and the computer will go "Are you sure (y/n)" or whatever and i'm like "y" and it will just go "Ok you got it boss"
But it's mine; I get to do what I want with it. I control the computer, the computer does not control me. I refuse to cede control to my phone or anything else. The thing is, a lot of people will joke that, like, "Oh I love just letting the machine tell me what to do, I don't know what I'm doing, it knows best" or whatever, but the thing you have to realize is that when you say that, you are abstracting away that "the phone" or whatever is not some value-neutral, logic-driven robot like from sci-fi; it is a collection of the capitalistic and fascistic desires of the tech oligarch fuckwits that are burning the world to the ground right now. You aren't submitting to the phone, you are submitting to Musk, Bezos, Nadella, Pichai, Cook, and all those other evil bastards.
Fuck them, fuck their little AI toys, and fuck this.
5K notes