#how to make money with chatgpt in 2023
Explore tagged Tumblr posts
Text
AI turns Amazon coders into Amazon warehouse workers

HEY SEATTLE! I'm appearing at the Cascade PBS Ideas Festival NEXT SATURDAY (May 31) with the folks from NPR's On The Media!
On a recent This Machine Kills episode, guest Hagen Blix described the ultimate form of "AI therapy" with a "human in the loop":
https://soundcloud.com/thismachinekillspod/405-ai-is-the-demon-god-of-capital-ft-hagen-blix
One actual therapist is just having ten ChatGPT windows open where they just, like, have five seconds to interrupt the ChatGPT. They have to scan them all and see if it says something really inappropriate. That's your job, to stop it.
Blix admits that's not where therapy is at…yet, but he references Laura Preston's 2023 N Plus One essay, "HUMAN_FALLBACK," which describes her work as a backstop for a real-estate "virtual assistant" that masqueraded as a human; she handled the queries that confused it, in a bid to keep customers from figuring out that they were engaging with a chatbot:
https://www.nplusonemag.com/issue-44/essays/human_fallback/
This is what makes investors and bosses slobber so hard for AI – a "productivity" boost that arises from taking away the bargaining power of workers so that they can be made to labor under worse conditions for less money. The efficiency gains of automation aren't just about using fewer workers to achieve the same output – it's about the fact that the workers you fire in this process can be used as a threat against the remaining workers: "Do your job and shut up or I'll fire you and give your job to one of your former colleagues who's now on the breadline."
This has been at the heart of labor fights over automation since the Industrial Revolution, when skilled textile workers took up the Luddite cause because their bosses wanted to fire them and replace them with child workers snatched from Napoleonic War orphanages:
https://pluralistic.net/2023/09/26/enochs-hammer/#thats-fronkonsteen
Textile automation wasn't just about producing more cloth – it was about producing cheaper, worse cloth. The new machines were so easy a child could use them, because that's who was using them – kidnapped war orphans. The adult textile workers the machines displaced weren't afraid of technology. Far from it! Weavers used the most advanced machinery of the day, and apprenticed for seven years to learn how to operate it. Luddites had the equivalent of a Masters in Engineering from MIT.
Weavers' guilds presented two problems for their bosses: first, they had enormous power, thanks to the extensive training required to operate their looms; and second, they used that power to regulate the quality of the goods they made. Even before the Industrial Revolution, weavers could have produced more cloth at lower prices by skimping on quality, but they refused, out of principle, because their work mattered to them.
Now, of course weavers also appreciated the value of their products, and understood that innovations that would allow them to increase their productivity and make more fabric at lower prices would be good for the world. They weren't snobs who thought that only the wealthy should go clothed. Weavers had continuously adopted numerous innovations, each of which increased the productivity and the quality of their wares.
Long before the Luddite uprising, weavers had petitioned factory owners and Parliament under the laws that guaranteed the guilds the right to oversee textile automation to ensure that it didn't come at the price of worker power or the quality of the textiles the machines produced. But the factory owners and their investors had captured Parliament, which ignored its own laws and did nothing as the "dark, Satanic mills" proliferated. Luddites only turned to property destruction after the system failed them.
Now, it's true that eventually, the machines improved and the fabric they turned out matched and exceeded the quality of the fabric that preceded the Industrial Revolution. But there's nothing about the way the Industrial Revolution unfolded – increasing the power of capital to pay workers less and treat them worse while flooding the market with inferior products – that was necessary or beneficial to that progress. Every other innovation in textile production up until that time had been undertaken with the cooperation of the guilds, who'd ensured that "progress" meant better lives for workers, better products for consumers, and lower prices. If the Luddites' demands for co-determination in the Industrial Revolution had been met, we might have gotten to the same world of superior products at lower costs, but without the immiseration of generations of workers, mass killings to suppress worker uprisings, and decades of defective products being foisted on the public.
So there are two stories about automation and labor: in the dominant narrative, workers are afraid of the automation that delivers benefits to all of us, stand in the way of progress, and get steamrollered for their own good, as well as ours. In the other narrative, workers are glad to have boring and dangerous parts of their work automated away and happy to produce more high-quality goods and services, and stand ready to assess and plan the rollout of new tools, and when workers object to automation, it's because they see automation being used to crush them and worsen the outputs they care about, at the expense of the customers they care for.
In modern automation/labor theory, this debate is framed in terms of "centaurs" (humans who are assisted by technology) and "reverse-centaurs" (humans who are conscripted to assist technology):
https://pluralistic.net/2023/04/12/algorithmic-wage-discrimination/#fishers-of-men
There are plenty of workers who are excited at the thought of using AI tools to relieve them of some drudgework. To the extent that these workers have power over their bosses and their working conditions, that excitement might well be justified. I hear a lot from programmers who work on their own projects about how nice it is to have a kind of hypertrophied macro system that can generate and tweak little automated tools on the fly so the humans can focus on the real, chewy challenges. Those workers are the centaurs, and it's no wonder that they're excited about improved tooling.
But the reverse-centaur version is a lot darker. The reverse-centaur coder is an assistant to the AI, charged with being a "human in the loop" who reviews the material that the AI produces. This is a pretty terrible job to have.
For starters, the kinds of mistakes that AI coders make are the hardest mistakes for human reviewers to catch. That's because LLMs are statistical prediction machines, spicy autocomplete that works by ingesting and analyzing a vast corpus of written materials and then producing outputs that represent a series of plausible guesses about which words should follow one another. To the extent that the reality the AI is participating in is statistically smooth and predictable, AI can often make eerily good guesses at words that turn into sentences or code that slot well into that reality.
But where reality is lumpy and irregular, AI stumbles. AI is intrinsically conservative. As a statistically informed guessing program, it wants the future to be like the past:
https://reallifemag.com/the-apophenic-machine/
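A toy sketch of what that "statistical prediction machine" framing means in practice: a bigram model that guesses the next word purely from how often words followed one another in its training text. Real LLMs are vastly more sophisticated, but the underlying move, predicting the statistically likely continuation of the past, is the same.

```python
# Toy "spicy autocomplete": count which word followed which in a tiny corpus,
# then always predict the most frequent continuation seen in training.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat sat on the rug".split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict(word: str) -> str:
    """Guess the next word from past frequencies alone."""
    return following[word].most_common(1)[0][0]

print(predict("the"))  # 'cat' -- the most common past continuation wins
print(predict("sat"))  # 'on'
```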
This means that AI coders stumble wherever the world contains rough patches and snags. Take "slopsquatting." For the most part, software libraries follow regular naming conventions. For example, there might be a series of text-handling libraries with names like "text.parsing.docx," "text.parsing.xml," and "text.parsing.markdown." But for some reason – maybe two different projects were merged, or maybe someone was just inattentive – there's also a library called "text.txt.parsing" (instead of "text.parsing.txt").
AI coders are doing inference based on statistical analysis, and anyone inferring what the .txt parsing library is called would guess, based on the other libraries, that it was "text.parsing.txt." And that's what the AI guesses, and so it tries to import that library to its software projects.
This creates a new security vulnerability, "slopsquatting," in which a malicious actor creates a library with the expected name, which replicates the functionality of the real library, but also contains malicious code:
https://www.theregister.com/2025/04/12/ai_code_suggestions_sabotage_supply_chain/
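A minimal sketch of how that failure mode plays out, reusing the hypothetical library names from the example above. The AI's statistically plausible import points at a package nobody publishes, which an attacker can then register; one partial defense is checking suggested dependencies against an explicit allowlist or lockfile before installing anything.

```python
# Hypothetical package names, per the example above. "text.txt.parsing" is the
# real (oddly named) library; "text.parsing.txt" is the AI's plausible guess,
# which doesn't exist until a slopsquatter uploads a malicious package by that name.
KNOWN_GOOD = {
    "text.parsing.docx",
    "text.parsing.xml",
    "text.parsing.markdown",
    "text.txt.parsing",
}

def flag_unknown(dependencies: list[str]) -> list[str]:
    """Return any suggested dependency that isn't on the project's allowlist."""
    return [pkg for pkg in dependencies if pkg not in KNOWN_GOOD]

ai_suggested = ["text.parsing.xml", "text.parsing.txt"]
print(flag_unknown(ai_suggested))  # ['text.parsing.txt'] -> review before installing
```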
Note that slopsquatting errors are extremely hard to spot. As is typical with AI coding errors, these are errors that are based on continuing a historical pattern, which is the sort of thing our own brains do all the time (think of trying to go up a step that isn't there after climbing to the top of a staircase). Notably, these are very different from the errors that a beginning programmer whose work is being reviewed by a more senior coder might make. These are the very hardest errors for humans to spot, and these are the errors that AIs make the most, and they do so at machine speed:
https://pluralistic.net/2024/04/23/maximal-plausibility/#reverse-centaurs
To be a human in the loop for an AI coder, a programmer must engage in sustained, careful, line-by-line and command-by-command scrutiny of the code. This is the hardest kind of code to review, and maintaining robotic vigilance over long periods at high speeds is something humans are very bad at. Indeed, it's the kind of task we try very hard to automate, since machines are much better at being machinelike than humans are. This is the essence of reverse-centaurism: when a human is expected to act like a machine in order to help the machine do something it can't do.
Humans routinely fail at spotting these errors, unsurprisingly. If the purpose of automation is to make superior goods at lower prices, then this would be a real concern, since a reverse-centaur coding arrangement is bound to produce code with lurking, pernicious, especially hard-to-spot bugs that present serious risks to users. But if the purpose of automation is to discipline labor – to force coders to accept worse conditions and pay – irrespective of the impact on quality, then AI is the perfect tool for the job. The point of the human isn't to catch the AI's errors so much as it is to catch the blame for the AI's errors – to be what Madeleine Clare Elish calls a "moral crumple zone":
https://estsjournal.org/index.php/ests/article/view/260
As has been the case since the Industrial Revolution, the project of automation isn't just about increasing productivity, it's about weakening labor power as a prelude to lowering quality. Take what's happened to the news industry, where mass layoffs are being offset by AI tools. At Hearst's King Features Syndicate, a single writer was charged with producing over 30 summer guides, the entire package:
https://www.404media.co/viral-ai-generated-summer-guide-printed-by-chicago-sun-times-was-made-by-magazine-giant-hearst/
That is an impossible task, which is why the writer turned to AI to do his homework, and then, infamously, published a "summer reading guide" that was full of nonexistent books that were hallucinated by a chatbot:
https://www.404media.co/chicago-sun-times-prints-ai-generated-summer-reading-list-with-books-that-dont-exist/
Most people reacted to this story as a consumer issue: they were outraged that the world was having a defective product foisted upon it. But the consumer issue here is downstream from the labor issue: when the writers at King Features Syndicate are turned into reverse-centaurs, they will inevitably produce defective outputs. The point of the worker – the "human in the loop" – isn't to supervise the AI, it's to take the blame for the AI. That's just what happened, as this poor schmuck absorbed an internet-sized rasher of shit flung his way by outraged social media users. After all, it was his byline on the story, not the chatbot's. He's the moral crumple-zone.
The implication of this is that consumers and workers are class allies in the automation wars. The point of using automation to weaken labor isn't just cheaper products – it's cheaper, defective products, inflicted on the unsuspecting and defenseless public who are no longer protected by workers' professionalism and pride in their jobs.
That's what's going on at Duolingo, where CEO Luis von Ahn created a firestorm by announcing mass firings of human language instructors, who would be replaced by AI. The "AI first" announcement pissed off Duolingo's workers, of course, but what caught von Ahn off-guard was how much this pissed off Duolingo's users:
https://tech.slashdot.org/story/25/05/25/0347239/duolingo-faces-massive-social-media-backlash-after-ai-first-comments
But of course, this makes perfect sense. After all, language-learners are literally incapable of spotting errors in the AI instruction they receive. If you spoke the language well enough to spot the AI's mistakes, you wouldn't need Duolingo! I don't doubt that there are countless ways in which AIs could benefit both language learners and the Duolingo workers who develop instructional materials, but for that to happen, workers' and learners' needs will have to be the focus of AI integration. Centaurs could produce great language learning materials with AI – but reverse-centaurs can only produce slop.
Unsurprisingly, many of the most successful AI products are "bossware" tools that let employers monitor and discipline workers who've been reverse-centaurized. Both blue-collar and white-collar workplaces have filled up with "electronic whips" that monitor and evaluate performance:
https://pluralistic.net/2024/08/02/despotism-on-demand/#virtual-whips
AI can give bosses "dashboards" that tell them which Amazon delivery drivers operate their vehicles with their mouths open (Amazon doesn't let its drivers sing on the job). Meanwhile, a German company called Celonis will sell your boss a kind of AI phrenology tool that assesses your "emotional quality" by spying on you while you work:
https://crackedlabs.org/en/data-work/publications/processmining-algomanage
Tech firms were among the first and most aggressive adopters of AI-based electronic whips. But these whips weren't used on coders – they were reserved for tech's vast blue-collar and contractor workforce: clickworkers, gig workers, warehouse workers, AI data-labelers and delivery drivers.
Tech bosses tormented these workers but pampered their coders. That wasn't out of any sentimental attachment to tech workers. Rather, tech bosses were afraid of tech workers, because tech workers possess a rare set of skills that can be harnessed by tech firms to produce gigantic returns. Tech workers have historically been princes of labor, able to command high salaries and deferential treatment from their bosses (think of the amazing tech "campus" perks), because their scarcity gave them power.
It's easy to predict how tech bosses would treat tech workers if they could get away with it – just look how they treat workers they aren't afraid of. Just like the textile mill owners of the Industrial Revolution, the thing that excites tech bosses about AI is the possibility of cutting off a group of powerful workers at the knees. After all, it took more than a century for strong labor unions to match the power that the pre-Industrial Revolution guilds had. If AI can crush the power of tech workers, it might buy tech bosses a century of free rein to shift value from their workforce to their investors, while also doing away with pesky Tron-pilled workers who believe they have a moral obligation to "fight for the user."
William Gibson famously wrote, "The future is here, it's just not evenly distributed." The workers that tech bosses don't fear are living in the future of the workers that tech bosses can't easily replace.
This week, the New York Times's veteran Amazon labor reporter Noam Scheiber published a deeply reported piece about the experience of coders at Amazon in the age of AI:
https://www.nytimes.com/2025/05/25/business/amazon-ai-coders.html
Amazon CEO Andy Jassy is palpably horny for AI coders, evidenced by investor memos boasting of AI's returns in "productivity and cost avoidance" and pronouncements about AI saving "the equivalent of 4,500 developer-years":
https://www.linkedin.com/posts/andy-jassy-8b1615_one-of-the-most-tedious-but-critical-tasks-activity-7232374162185461760-AdSz/
Amazon is among the most notorious abusers of blue-collar labor, the workplace where everyone who doesn't have a bullshit laptop job is expected to piss in a bottle and spend an unpaid hour before and after work going through a bag- and body-search. Amazon's blue-collar workers are under continuous, totalizing, judging AI scrutiny that scores them based on whether their eyeballs are correctly oriented, whether they take too long to pick up an object, whether they pee too often. Amazon warehouse workers are injured at three times the national average. Amazon AIs scan social media for disgruntled workers talking about unions, and Amazon has another AI tool that predicts which shops and departments are most likely to want to unionize.
Scheiber's piece describes what it's like to be an Amazon tech worker who's getting the reverse-centaur treatment that has heretofore been reserved for warehouse workers and drivers. They describe "speedups" in which they are moved from writing code to reviewing AI code, their jobs transformed from solving chewy intellectual puzzles to racing to spot hard-to-find AI coding errors as a clock ticks down. Amazon bosses haven't ordered their tech workers to use AI, just raised their quotas to a level that can't be attained without getting an AI to do most of the work – just like the King Features writer whose guide ran in the Chicago Sun-Times and who was expected to write all 30 articles in the summer guide package on his own. No one made him use AI, but he wasn't going to produce 30 articles on deadline without a chatbot.
Amazon insists that it is treating AI as an assistant for its coders, but the actual working conditions make it clear that this is a reverse-centaur transformation. Scheiber discusses a dissident internal group at Amazon called Amazon Employees for Climate Justice, who link the company's use of AI to its carbon footprint. Beyond those climate concerns, these workers are treating AI as a labor issue.
Amazon's coders have been making tentative gestures of solidarity towards its blue-collar workforce since the pandemic broke out, walking out in support of striking warehouse workers (and getting fired for doing so):
https://pluralistic.net/2020/04/14/abolish-silicon-valley/#hang-together-hang-separately
But those firings haven't deterred Amazon's tech workers from making common cause with their comrades on the shop floor:
https://pluralistic.net/2021/01/19/deastroturfing/#real-power
When techies describe their experience of AI, it sometimes sounds like they're describing two completely different realities – and that's because they are. For workers with power and control, automation turns them into centaurs, who get to use AI tools to improve their work-lives. For workers whose power is waning, AI is a tool for reverse-centaurism, an electronic whip that pushes them to work at superhuman speeds. And when they fail, these workers become "moral crumple zones," absorbing the blame for the defective products their bosses pushed out in order to goose profits.
As ever, what a technology does pales in comparison to who it does it for and who it does it to.
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2025/05/27/rancid-vibe-coding/#class-war
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
351 notes
Text
i've been thinking about AI a lot lately, and i know a lot of us are. it's only natural, considering that it's forced onto us 24/7 by most search engines, pdf readers, microsoft, and apple. but i think what is increasingly making me crazy, as an academic, college teacher, and grad student, is the forcible cramming of it into our everyday lives and social institutions.
no one asked for this technology -- and that's what's so alarming to me.
technology once RESPONDED to the needs and intuitions of a society. but no one needed AI, at least not in the terrifying, technocratic, data-mining, cognition-atrophying form that it's evolving into, and no one asked for this paradigm shift to a shitty digital algorithm that we don't understand.
it's different from when the iphone came out and started a revolution where pretty much everyone needed a smartphone. there was an integration -- i remember the first iphone commercial and release news. it wasn't so sudden, but it was probably inevitable given the evolution of the internet and technology that everyone would have a smartphone.
what i know about AI is this: from the first 6 months of ChatGPT's release, they have tried to say it is INEVITABLE.
I walked into my classroom in Fall of 2023 to a room full of 18 year-olds, and suddenly, they were all using it. they claimed it helped them "fill in the gaps" of things they didn't understand about writing. i work with 4th year college students applying to med school -- they use "chat" to help them "come up with sentences they couldn't come up with on their own." i work with a 3rd year pharmacy school student applying to a fellowship who doesn't speak english as a primary language and he's using "AI to sound more American." i receive a text from an ex-boyfriend about how he 'told ChatGPT to write a poem about me.' (it's supposed to be funny. it's not.) i'm at a coffee shop listening to two women talk about how they use ChatGPT to write emails and cut down on the hours they work every day. i scroll past an AI generated advertisement that could have been made by a graphic designer. i'm watching as a candidate up for the job of the new dean of the college of arts and sciences at my university announces that AI should be the primary goal of humanities departments -- "if you're a faculty member and you're not able to say how you USE AI in your classroom, then you're wasting the university's time and money." i'm at a seminar in DC where colleagues of mine -- fellow teachers and grad students -- are exclaiming excitedly, "I HATE AI don't get me wrong, but it's helpful for sharpening my students' visual analytical skills." i'm watching as US congressional republicans try to pass a law that puts no federal oversight on AI for ten years. i'm watching a YouTube video of a woman talking about Meta's AI data center in her backyard that has basically turned her water pressure to a trickle. i'm reading an article about how OpenAI founder Sam Altman claims that ChatGPT can rival someone with a PhD. i'm a year and a half away, after a decade of work, from achieving a PhD.
billionaires in silicon valley made us -- and my students -- think that AI is responding to a specific technological dearth: it makes things easier. it helps us understand a language we don't speak. it helps us write better. it helps us make sense of a world we don't understand. it helps us sharpen our skills. it helps us write an email faster. it helps us shorten the labor and make the load lighter. it helps us make art and music and literature.
the alarming thing is -- it is responding to a need, but not the one they think. it's responding to a need that we are overworked. it's responding to a need that the moral knowledge we need to possess is vast, complicated, and unknowable in its entirety. it's responding to a need that emails fucking suck. it's responding to a need that art and music, which the same tech and engineering bros once claimed were pointless ventures, are hard to think about and difficult to create. it's responding to the need that we need TIME, and in capitalism, there is rarely enough for us to create and study art that cannot be sold and bought for the sake of getting someone rich.
AI is not what you think it is -- of course, it is stupid, it is dumb, and i fucking hate it as much as the next guy, but it is a red fucking flag. not even mentioning the climate catastrophe that it's fast tracking, AI tech companies by and large want us to believe that there isn't time, that there isn't a point to doing the things that TAKE time, that there isn't room for figuring out things that are hard and grey and big and complicated. BUT WORTH, FUCKING, DOING.
but there is. THERE ALWAYS IS. don't let them make you think that the work and things you love are NOT worth doing. AI is NOT inevitable and it does NOT have to be the technological revolution that they want us to think it is.
MAKE ART.
ASK QUESTIONS.
STUDY ART.
DO IT BAD; DO IT SHITTY.
FUCK AI FOREVER.
#anti ai#ai rant#fuck ai#long post#i know that ai could be used for good#but in my opinion lol#it's definitely not being used for those reasons#if someone can point outside of the three examples ai has been used in the health sciences for good then i'll believe you#humanities#higher education#make art#do it bad#ai
17 notes
Text
The cryptocurrency hype of the past few years already started to introduce people to these problems. Despite producing little to no tangible benefits — unless you count letting rich people make money off speculation and scams — Bitcoin consumed more energy and computer parts than medium-sized countries and crypto miners were so voracious in their energy needs that they turned shuttered coal plants back on to process crypto transactions. Even after the crypto crash, Bitcoin still used more energy in 2023 than the previous year, but some miners found a new opportunity: powering the generative AI boom. The AI tools being pushed by OpenAI, Google, and their peers are far more energy intensive than the products they aim to displace. In the days after ChatGPT’s release in late 2022, Sam Altman called its computing costs “eye-watering” and several months later Alphabet chairman John Hennessy told Reuters that getting a response from Google’s chatbot would “likely cost 10 times more” than using its traditional search tools. Instead of reassessing their plans, major tech companies are doubling down and planning a massive expansion of the computing infrastructure available to them.
[...]
As the cloud took over, more computation fell into the hands of a few dominant tech companies and they made the move to what are called “hyperscale” data centers. Those facilities are usually over 10,000 square feet and hold more than 5,000 servers, but those being built today are often many times larger than that. For example, Amazon says its data centers can have up to 50,000 servers each, while Microsoft has a campus of 20 data centers in Quincy, Washington with almost half a million servers between them. By the end of 2020, Amazon, Microsoft, and Google controlled half of the 597 hyperscale data centres in the world, but what’s even more concerning is how rapidly that number is increasing. By mid-2023, the number of hyperscale data centres stood at 926 and Synergy Research estimates another 427 will be built in the coming years to keep up with the expansion of resource-intensive AI tools and other demands for increased computation. All those data centers come with an increasingly significant resource footprint. A recent report from the International Energy Agency (IEA) estimates that the global energy demand of data centers, AI, and crypto could more than double by 2026, increasing from 460 TWh in 2022 to up to 1,050 TWh — similar to the energy consumption of Japan. Meanwhile, in the United States, data center energy use could triple from 130 TWh in 2022 — about 2.5% of the country’s total — to 390 TWh by the end of the decade, accounting for a 7.5% share of total energy, according to Boston Consulting Group. That’s nothing compared to Ireland, where the IEA estimates data centers, AI, and crypto could consume a third of all power in 2026, up from 17% in 2022. Water use is going up too: Google reported it used 5.2 billion gallons of water in its data centers in 2022, a jump of 20% from the previous year, while Microsoft used 1.7 billion gallons in its data centers, an increase of 34% on 2021. University of California, Riverside researcher Shaolei Ren told Fortune, “It’s fair to say the majority of the growth is due to AI.” But these are not just large abstract numbers; they have real material consequences that a lot of communities are getting fed up with just as the companies seek to massively expand their data center footprints.
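A quick sanity check of the US figures in that excerpt, as a rough sketch that assumes total national consumption stays roughly flat over the period:

```python
# Figures as quoted above: 130 TWh in 2022 (~2.5% of the US total), tripling to
# 390 TWh by the end of the decade.
data_centers_2022_twh = 130
share_2022 = 0.025

total_us_twh = data_centers_2022_twh / share_2022
print(total_us_twh)                          # ~5200 TWh implied national total

data_centers_2030_twh = 390
print(data_centers_2030_twh / total_us_twh)  # ~0.075 -> the 7.5% share cited
```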
9 February 2024
#ai#artificial intelligence#energy#big data#silicon valley#climate change#destroy your local AI data centre
75 notes
Text
Musk's Social Security Claim
Social Security fraud does exist, but it's relatively small compared to the total benefits paid. The Social Security Administration (SSA) has multiple fraud prevention measures, but errors and fraud still happen.
How Much Money Is Lost to Fraud?
1. Annual Estimated Losses
The SSA’s Office of the Inspector General (OIG) investigates fraud cases. In 2023, their reports estimated that around $8 billion in improper payments occurred. However, not all of this is fraud—many are errors (such as overpayments due to outdated records).
True fraud cases (deliberate deception) are harder to quantify but likely in the range of hundreds of millions rather than billions.
2. Types of Social Security Fraud
Receiving Benefits for a Deceased Person – Sometimes, relatives fail to report a beneficiary’s death and continue collecting payments.
Disability Fraud – People who falsely claim to be disabled while working or living a normal life.
Identity Theft – Criminals use stolen Social Security numbers to claim benefits fraudulently.
Representative Payee Fraud – A person managing benefits for someone else misuses the funds.
3. SSA’s Response to Fraud
The SSA cross-checks records with government death databases to prevent improper payments.
Banks are required to report deaths, and any checks sent to deceased individuals are supposed to be returned.
In 2022, the OIG recovered over $100 million in fraud cases.
How Big Is the Fraud Problem?
Compared to the $1.4 trillion Social Security pays out yearly, the fraud and improper payments make up less than 1% of total spending. While it’s a concern, it’s not a major drain on the system.
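Working that share out from the figures quoted above, as a rough back-of-the-envelope sketch:

```python
# ~$8B in improper payments (2023 OIG estimate) against ~$1.4T paid out yearly.
improper_payments = 8e9
annual_benefits = 1.4e12
print(improper_payments / annual_benefits)  # ~0.0057, i.e. well under 1%
```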
By ChatGPT
#quote of the day#ChatGPT#ai#quotes#elon musk#doge#us government#social security#retirement#money#political#taxes#taxpayer money#democrats#republicans#trump#biden#fraud#corruption
7 notes
Text
SPECIFIC/UNSPECIFIC MANIFESTORS
Hey y'all.
In May of 2023 I affirmed almost exclusively one affirmation, "all my wishes come true easily and effortlessly". it was the most blessed period of my life since my childhood. i even traveled to Corfu, Greece, which is one of those "travel is hard to get" ones for me, that i usually have more resistance to. it was the most beautiful vacation.
plus, everything in my life went so good, i was journaling every day and i looked through what i was writing and putting pictures of in my goodnotes app on my ipad (another manifestation of mine i will maybe make a post another time) and my life was super happy.
so i decided that this year this will be my only affirmation. because last year i was like yeah well this is great but i want specific things and this is kinda good/awesome things that are coming sort of as a surprise from the universe. but it didn't work as well for me to affirm for specific things, i manifested a lot but it didn't make me as happy. so i decided this year to try again and let the universe decide.

but this leads me to my point: I know of myself what I am in Human Design, an unspecific manifestor. This means exactly what i reiterated here: unspecific affirmations work better for us unspecific manifestors. it's not that we cannot get specific things, it's that we are more 'made' to flow with the abundance of the universe.
for example for me as an unspecific manifestor, i get results faster if i affirm "i have a lot of money" rather than "i have X (specific) sum of money".
and as a specific manifestor, one would do better to get super specific with what they are manifesting.
i think this is an under-discussed subject in the manifestation world that trips people up. for example, an unspecific manifestor might teach a specific manifestor that "they are better off looking for the feeling of the desire and to affirm with feeling," while a specific manifestor might tell an unspecific one to get super specific and manifest simply through mindless repetition, and neither approach will work the same for the other.
so i started with my affirmation, "all my wishes come true easily and effortlessly" (no red car, no million dollars, no specific sp, etc - literally, unspecific) 6 days ago and today i started journaling too.
just while i was journaling i noticed things happening around me that made me feel like this aff works so well for me. for ex i didn't like what my brother was listening to, and after affirming a few times he left to listen in his apartment. then he came back with soft music, and he got an ad for an event that i saw yesterday with my friend, who said it seemed too expensive for her for what they offered (but i secretly wished we could go). seeing the ad, i told my brother what she said without thinking much of it and he offered to pay for both of us 🥰. coincidence? i think not.
so to find out whether u are a specific or unspecific manifestor in HD u have to look online and ask google or chatgpt how to find out "am i a specific or unspecific manifestor in human design" (because i forgot which arrow in the chart exactly shows u) and then do ur chart with ur date and time of birth.
U can find many interesting things by doing ur chart and sending it to chatgpt, rather than paying for an explanation, but anyways....
good luck! :)

#affirm and persist#affirmations#affirmdaily#affirmyourreality#law of assumption#loa#loassumption#persistence#robotic affirming#human design#manifesting#manifestation
39 notes
Text
can i also just say how FUCKING weird it is that people are using the argument “well, people still use chatgpt!!” when people are legitimately pointing out the ethical issues with duolingo going ai-first.
and before i see one more person say "this doesn't mean that humans will lose their jobs as contractor roles for duo!!" yes it does. their company will not prioritize keeping HUMANS they have to pay when they can use a robot to do that work for little cost.
and they can say ai will only be used to augment and not replace humans all they want - i don't buy that for a goddamn second.
here’s a crash course in the difference between the two:
chatgpt never marketed itself as a company that values human input and relies on it for growth, only to suddenly make a curveball change to be ai-first.
and it’s not because it benefits their company but to save themselves some money so their ceo can continue to be overpaid (quote from salary.com on the ceo’s salary in 2023 - “As Chief Executive Officer at Duolingo, Inc., Luis von Ahn made $766,500 in total compensation..”)
and yes his salary is relevant when the salaries of the other employees doing the actual contractor jobs at duolingo are suddenly being messed with because the company wants to save money and prioritize the use of ai.
chatgpt has also always been ai. being an ai platform is literally the only purpose the app serves. duolingo has not always been ai. they have used ai services in the past, however they haven't (until this news broke) advertised themselves as an ai-first company.
#maeberzatto#like i’m so genuinely mad over this#AND FYI: IT ISNT BECAUSE THEY VALUE THEIR EMPLOYEES#ITS SO THEY CAN SAVE MONEYYY
2 notes
Text
that article is extremely funny but this bit specifically is bullshit
OpenAI’s shift towards profitability, combined with Sam Altman’s recent public statements indicates a number of things. Although Altman might not prioritize profits, OpenAI does. While OpenAI is routinely pumping more money to make their GPT LLMs more powerful and more clever, Sam Altman has made several public statements that basically say that AI, if unregulated by the government will prove to be disastrous. In fact, Altman, has been very vocal about the need for guidelines on how AI is developed. There have been numerous instances where Altman has predicted that AI, in its current form will take away millions of jobs. Some tech experts would even go as far as to say that Altman is having a Frankenstein moment–one, where he is somewhat regretful of the monster that he has created, although it seems that would be a farfetched reading of the situation. Despite this, OpenAI has been on the lookout for new and better ways to monetise its GPT-4 LLMs. However, it hasn’t achieved profitability. Its losses reached $540 million since the development of ChatGPT. Microsoft’s $10 billion investment, along with that of some other venture capital firms has kept OpenAI afloat and going for now. However, as Analytics India Magazine reports, OpenAI’s projection of reaching $200 million in annual revenue in 2023 and aiming for $1 billion in 2024 seems ambitious, given its mounting losses.
altman is agitating for regulation of the black-box algorithm industry because he wants to shut the door behind him. he and his company would like to impose increased compliance and regulatory costs on their competitors, and to be the people who can speak with the expert's voice when the regulations that govern them are drafted.
this isn't some moral play, it's an economic one. even in this segment forbes sets out the exact economic incentives driving this activity but completely fails to connect the dots on why they are acting like this.
24 notes
Text
AUGUST 28, 2023
IT'S FUCKING MONDAY.
WORDS OF WISDOM OF THE FUCKING DAY:
A ROOM WITHOUT BOOKS IS LIKE A BODY WITHOUT A SOUL.
EDUCATE YOUR IGNORANT ASS:
10 DAMN RULES FOR PLAY. more>>
FUCKING MIND-BLOWING BOOK OF THE DAY:
HOW MOTHERFUCKERS MAKE MONEY ONLINE WITH CHATGPT. more>>
USEFUL SHIT OF THE GODDAMN DAY:
GET SOME FUCKING WORK DONE OUTSIDE. more>>
WEBSITE OF THE FUCKING DAY:
KNUCKLE TATTOO GENERATOR. more>>
AWESOME-AS-SHIT VIDEO OF THE DAY:
HOW A GODDAMN NUCLEAR WAR WILL START. more>>
12 notes
Text
Until the dramatic departure of OpenAI’s cofounder and CEO Sam Altman on Friday, Mira Murati was its chief technology officer—but you could also call her its minister of truth. In addition to heading the teams that develop tools such as ChatGPT and Dall-E, it’s been her job to make sure those products don’t mislead people, show bias, or snuff out humanity altogether.
This interview was conducted in July 2023 for WIRED’s cover story on OpenAI. It is being published today after Sam Altman’s sudden departure to provide a glimpse at the thinking of the powerful AI company’s new boss.
Steven Levy: How did you come to join OpenAI?
Mira Murati: My background is in engineering, and I worked in aerospace, automotive, VR, and AR. Both in my time at Tesla [where she shepherded the Model X], and at a VR company [Leap Motion] I was doing applications of AI in the real world. I very quickly believed that AGI would be the last and most important major technology that we built, and I wanted to be at the heart of it. OpenAI was the only organization at the time that was incentivized to work on the capabilities of AI technology and also make sure that it goes well. When I joined in 2018, I began working on our supercomputing strategy and managing a couple of research teams.
What moments stand out to you as key milestones during your tenure here?
There are so many big-deal moments, it’s hard to remember. We live in the future, and we see crazy things every day. But I do remember GPT-3 being able to translate. I speak Italian, Albanian, and English. I remember just creating pair prompts of English and Italian. And all of a sudden, even though we never trained it to translate in Italian, it could do it fairly well.
You were at OpenAI early enough to be there when it changed from a pure nonprofit to reorganizing so that a for-profit entity lived inside the structure. How did you feel about that?
It was not something that was done lightly. To really understand how to make our models better and safer, you need to deploy them at scale. That costs a lot of money. It requires you to have a business plan, because your generous nonprofit donors aren't going to give billions like investors would. As far as I know, there's no other structure like this. The key thing was protecting the mission of the nonprofit.
That might be tricky since you partner so deeply with a big tech company. Do you feel your mission is aligned with Microsoft’s?
In the sense that they believe that this is our mission.
But that's not their mission.
No, that's not their mission. But it was important for the investor to actually believe that it’s our mission.
When you joined in 2018, OpenAI was mainly a research lab. While you still do research, you’re now very much a product company. Has that changed the culture?
It has definitely changed the company a lot. I feel like almost every year, there's some sort of paradigm shift where we have to reconsider how we're doing things. It is kind of like an evolution. What's more obvious now to everyone is this need for continuous adaptation in society, helping bring this technology to the world in a responsible way, and helping society adapt to this change. That wasn't necessarily obvious five years ago, when we were just doing stuff in our lab. But putting GPT-3 in an API, in working with customers and developers, helped us build this muscle of understanding the potential that the technology has to change things in the real world, often in ways that are different than what we predict.
You were involved in Dall-E. Because it outputs imagery, you had to consider different things than a text model, including who owns the images that the model draws upon. What were your fears and how successful you think you were?
Obviously, we did a ton of red-teaming. I remember it being a source of joy, levity, and fun. People came up with all these like creative, crazy prompts. We decided to make it available in labs, as an easy way for people to interact with the technology and learn about it. And also to think about policy implications and about how Dall-E can affect products and social media or other things out there. We also worked a lot with creatives, to get their input along the way, because we see it internally as a tool that really enhances creativity, as opposed to replacing it. Initially there was speculation that AI would first automate a bunch of jobs, and creativity was the area where we humans had a monopoly. But we've seen that these AI models actually have a potential to really be creative. When you see artists play with Dall-E, the outputs are really magnificent.
Since OpenAI has released its products, there have been questions about their immediate impact in things like copyright, plagiarism, and jobs. By putting things like GPT-4 in the wild, it’s almost like you’re forcing the public to deal with those issues. Was that intentional?
Definitely. It's actually very important to figure out how to bring it out there in a way that's safe and responsible, and helps people integrate it into their workflow. It’s going to change entire industries; people have compared it to electricity or the printing press. And so it's very important to start actually integrating it in every layer of society and think about things like copyright laws, privacy, governance and regulation. We have to make sure that people really experience for themselves what this technology is capable of versus reading about it in some press release, especially as the technological progress continues to be so rapid. It's futile to resist it. I think it's important to embrace it and figure out how it's going to go well.
Are you convinced that that's the optimal way to move us toward AGI?
I haven't come up with a better way than iterative deployments to figure out how you get this continuous adaptation and feedback from the real world feeding back into the technology to make it more robust to these use cases. It’s very important to do this now, while the stakes are still low. As we get closer to AGI, it's probably going to evolve again, and our deployment strategy will change as we get closer to it.
5 notes
Text
Welcome back to Chain Reaction. Subscribe here.

Annyeong, or hello, friends! While I’m typically based in New York City, this week I’m reporting from Seoul, South Korea for Korea Blockchain Week. The week has been jam-packed with conference events as well as offsite side events and networking happy hours. I’ve listened to a number of panels on topics like web3 gaming, enterprise blockchain adoption (I moderated one), institutional adoption, the regulatory climate and investing in Asia. I also kept busy with a number of interviews with local experts on the market evolving out east, as well as people who flew in to meet with startups based in the region. This means I’ll be putting out more articles on TechCrunch based on these conversations in the coming days and weeks…so keep an eye out for that. Meanwhile, there was some news that transpired in the web3 world, so let’s get into it.

This week in web3

Crypto funding in August wasn’t as good as the numbers may lead you to believe (TC+)
Blockchain tech needs a ‘ChatGPT moment’ to scale enterprise adoption (TC+)
MetaMask now allows crypto cash-out to PayPal and banks, but fees could be high
Gleen’s tech-savvy chatbot for Discord and Slack attracts Solana founder in oversubscribed round
The US can’t kill crypto: Real regulations are coming

The latest pod

For this week’s episode, Jacquelyn interviewed Charlie Shrem, founder of the Bitcoin Foundation, general partner at Druid Ventures and host of the Charlie Shrem show. Before all that, he was the co-founder and CEO of BitInstant, a bitcoin payment processor that started in 2011. Shortly after founding the company, he was charged with operating an unlicensed money-transmitting business and with allegedly attempting to launder over $1 million through the now-defunct dark web marketplace Silk Road. He spent a little over a year in a low-security prison as a result. Now, Charlie is a vocal advocate for clearer crypto regulation, a crypto investor, a podcaster and even a movie producer. We discussed how the bitcoin and crypto ecosystems have changed (and stayed the same) over the past decade, as well as how his incarceration shaped his view of the industry.

We also talked about:

The need for regulatory clarity in the U.S.
Crypto projects and sectors he’s following
How the bitcoin ecosystem is growing
Friend.tech
Advice for listeners

Subscribe to Chain Reaction on Apple Podcasts, Spotify or your favorite pod platform to keep up with the latest episodes, and please leave us a review if you like what you hear!

Follow the money

Story Protocol raised over $54 million in a round led by a16z crypto
Domain name startup D3 Global raised $5 million in a seed round led by Shima Capital
Cross-chain communication protocol Socket raised $5 million from Coinbase Ventures and Frameworks
Kotani gets $2 million pre-seed to help African workers send money home via crypto
GenTwo raised $15 million in a Series A funding round led by Point72 Ventures

What else we’re writing

Want to branch out from the world of web3? Here are some articles on TechCrunch that caught our attention this week.

Our favorite startups from YC’s Summer 2023 Demo Day, Day 1 (TC+)
Clubhouse is trying to make a comeback
Tech companies are finding their profitability groove (TC+)
Here’s why some investors are sitting out of YC Demo Day (TC+)
EU confirms six (mostly US) tech giants are subject to Digital Markets Act
2 notes
Text
Brain Exercise Text #1 for Akhenaton (@AkhenatonIAM) and Vincent Callebaut (@vincentcallebautarchitectures): (July 08, 2025)
Absolutely — here’s a clear, thorough explanation, laid out point by point in the manner of a professor explaining a complex economic subject to a curious student. Let’s begin with your question:
Student’s Question:
“Paris and Las Vegas each receive between 40 and 47 million tourists a year. How much money in revenue do they make from those millions of tourists annually?”
ChatGPT Professor’s Answer:
Excellent question. To give a meaningful answer, we need to break this into two major parts:
How many tourists each city receives annually.
How much tourism revenue those visitors generate — and why the numbers differ between the two cities.
Let’s explore both carefully.
1. Tourist Volume – How Many Visitors Do Paris and Las Vegas Get Each Year?
Let’s start with Paris. Paris consistently ranks as one of the most visited cities on Earth. On average, it welcomes between 45 and 47 million tourists annually, not counting short-term business travelers. If you include the greater metropolitan area (known as Île-de-France), which covers the suburbs and surrounding region, the number often exceeds 50 million visitors per year. The city draws a mix of domestic French tourists and international travelers from the United States, China, the UK, Germany, and the Middle East.
Now let’s look at Las Vegas. Las Vegas typically receives between 38 and 42 million visitors per year, with the number hovering around 40.8 million in 2023, according to official data from the Las Vegas Convention and Visitors Authority. Most of Vegas’s tourism comes from within the United States, with international visitors making up a smaller but still significant portion.
So, in summary, both cities host a massive number of tourists each year — in the 40 to 47 million range — making them global leaders in tourism volume.
2. How Much Revenue Do These Tourists Generate?
Here’s where things get interesting. Although the number of tourists is roughly similar, the amount of money generated by each city differs significantly — and that has everything to do with how tourists spend their money in each destination.
Let’s start again with Paris. Tourism in Paris brings in an estimated €20 to €25 billion annually, which translates to roughly $22 to $27 billion USD. That revenue includes spending on hotels and accommodations, restaurants and cafes, transportation (like taxis and the Metro), tickets to cultural sites (like the Eiffel Tower or the Louvre), and shopping — especially luxury retail. Paris is a fashion capital, and tourists frequently spend on high-end brands like Chanel, Dior, and Louis Vuitton. Visitors from the U.S., China, and the Middle East often contribute disproportionately to luxury spending.
On the other hand, Las Vegas generates a much larger amount of tourism revenue — between $36 and $40 billion USD annually. That’s a significantly higher figure than Paris, despite welcoming fewer tourists. Why? Because Las Vegas has an entirely different model. It’s built around entertainment, gambling, and events. Visitors spend heavily on casino gaming (over $10 billion per year), luxury hotels, world-famous shows and concerts, upscale dining, and nightclubs. In addition, Vegas is a global convention capital, and business tourism adds enormously to its economic intake.
So while Paris thrives on cultural tourism and shopping, Las Vegas thrives on experiential tourism — gambling, shows, nightlife, and events that encourage higher spending per person.
3. How Much Does Each Tourist Spend on Average?
If we break down the total revenue against the number of visitors, we see the key difference:
In Paris, the average tourist spends about $500 to $600 USD during their stay. This makes sense when you consider that Paris’s appeal lies in museums, monuments, meals, and moderate hotel stays. Of course, some tourists spend more — especially luxury shoppers — but many visitors are students, backpackers, or families looking for cultural experiences on a budget.
In Las Vegas, the average spend per visitor is much higher — between $900 and $1,000 USD. This is because Vegas is designed to encourage spending at every turn: gambling, drinks, VIP shows, fine dining, and lavish accommodations. Even domestic tourists who visit for a weekend often spend heavily. On top of that, Vegas also earns significant revenue from corporate events and conventions, where companies may pay thousands per attendee.
So, even though Paris and Las Vegas host a similar number of tourists, Las Vegas makes far more money per tourist.
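A quick sketch of that per-visitor arithmetic, using the ranges quoted in this post (estimates, not precise data):

```python
# Revenue ranges and visitor counts as cited above.
paris_revenue_usd = (22e9, 27e9)   # $22-27B per year
paris_visitors = 45e6              # ~45M visitors

vegas_revenue_usd = (36e9, 40e9)   # $36-40B per year
vegas_visitors = 40.8e6            # ~40.8M visitors (2023)

print([round(r / paris_visitors) for r in paris_revenue_usd])  # [489, 600] -> roughly $500-600 each
print([round(r / vegas_visitors) for r in vegas_revenue_usd])  # [882, 980] -> roughly $900-1,000 each
```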
4. Final Conclusion
Both Paris and Las Vegas receive tens of millions of tourists each year — typically between 40 and 47 million. But their tourism revenues differ significantly:
Paris earns about €20–25 billion, or $22–27 billion USD, per year.
Las Vegas earns between $36–40 billion USD per year.
This difference is due not to the number of tourists, but to how those tourists spend. In Paris, visitors come for culture, history, art, romance, and fashion. In Las Vegas, visitors come for excitement, spectacle, high-stakes entertainment, and conventions — all of which drive much higher spending.
So your question cuts to the heart of a very important economic point: The quality of tourism (in terms of spending behavior) can be just as important as the quantity of tourists.
Let me know if you'd like a follow-up breakdown of how spending categories differ — for example, how much goes to hotels, dining, shopping, and so on.
0 notes
Text
AI-Powered Phishing Attacks: Can AI Fool Even Cybersecurity Experts?
Cyber threats are evolving fast, and artificial intelligence (AI) is now playing a major role in phishing attacks. These attacks, where hackers try to trick people into revealing sensitive information, are becoming more advanced.
Did you know? In 2023, over 60% of phishing attacks used AI tools to mimic human writing styles, making scams harder to spot. As artificial intelligence (AI) grows smarter, cybercriminals are weaponizing it to launch sneaky, personalized phishing attacks. But here’s the big question: Can these AI tricks fool even cybersecurity pros? Let’s break it down.
What Makes AI-Powered Phishing Different?
Phishing isn’t new. For years, scammers have sent fake emails like “Your account is locked!” to steal passwords or money. But traditional phishing has flaws:
Poor grammar or spelling mistakes.
Generic messages (e.g., “Dear Customer”).
Easy to block with basic spam filters.
AI changes the game. Hackers now use AI tools like ChatGPT to craft flawless, convincing messages. Here’s a quick comparison of traditional vs. AI-powered phishing:

Email content: full of errors and generic vs. perfect grammar, tailored to you.
Personalization: uses your name at most vs. mentions your job, hobbies, etc.
Scale: sent to thousands at once vs. targets specific individuals.
Adaptability: easy to detect over time vs. learns and improves after failures.
Detection: caught by basic security tools vs. bypasses many filters.
Can AI Outsmart Cybersecurity Experts?
Cybersecurity experts are trained to spot red flags. But AI attacks are designed to slip under the radar.

How Do Cybersecurity Experts Fight Back?
While hackers use AI for harm, cybersecurity teams are ready to defend:
AI Detectors: Tools like Darktrace scan emails for AI-generated text.
Behavior Analysis: AI learns your normal habits (e.g. when you log in) and flags odd activity.
Simulated Attacks: Companies use AI to run fake phishing drills and train employees.
But it’s a constant battle. As phishing AI evolves, defenses must adapt faster.
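As a toy illustration of the "Behavior Analysis" idea above (hypothetical thresholds; real products model far richer signals than login time and device alone):

```python
# Flag a login as suspicious if it falls outside the user's usual hours or
# comes from a device the account has never used before.
from datetime import datetime

USUAL_LOGIN_HOURS = range(8, 19)  # this user normally logs in 08:00-18:59

def is_suspicious(login_time: datetime, new_device: bool) -> bool:
    odd_hour = login_time.hour not in USUAL_LOGIN_HOURS
    return odd_hour or new_device

print(is_suspicious(datetime(2024, 3, 5, 3, 12), new_device=True))    # True  -> challenge or block
print(is_suspicious(datetime(2024, 3, 5, 10, 30), new_device=False))  # False -> proceed as normal
```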
How to Protect Yourself
You don’t need to be a tech genius to stay safe. Follow these steps:
Slow Down: Phishing preys on panic. Check URLs before clicking (see the sketch after this list).
Verify Odd Requests: Call the sender directly if an email seems “off.”
Use Multi-Factor Authentication (MFA): Even if hackers get your password, MFA blocks them.
Update Software: The newest security patches fix the loopholes that AI-powered attacks exploit.
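A minimal sketch of the "check URLs before clicking" advice: compare a link's actual hostname against the domain you expect (the domains here are made-up examples):

```python
# A lookalike domain can embed the real brand name while actually belonging to
# someone else, so check the registered domain, not just whether the name appears.
from urllib.parse import urlparse

def matches_domain(url: str, expected_domain: str) -> bool:
    host = urlparse(url).hostname or ""
    return host == expected_domain or host.endswith("." + expected_domain)

print(matches_domain("https://accounts.example.com/login", "example.com"))           # True
print(matches_domain("https://example.com.security-check.io/login", "example.com"))  # False -> phishy
```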
Conclusion
AI-powered phishing attacks are a serious threat, even for cybersecurity experts. However, as hackers use AI to create more convincing scams, cybersecurity professionals are also using AI to build better defenses. The key to staying safe is awareness. By being cautious, double-checking unexpected requests, and using security tools, individuals and businesses can reduce the risk of falling victim to AI-powered scams.
The battle between hackers and cybersecurity experts is ongoing, but one thing is clear AI is changing the game for both sides.
0 notes
Text
From the Newsletter of the International Center of Medieval Art
Spring 2025, No. 1
Reflections on AI in the Classroom by Sonja Drimmer & Christopher J. Nygren.
"We believe that the intellectual, ethical,
and institutional downsides to using this
technology are so substantial that normalizing
its integration into pedagogy poses risks that
far outweigh whatever benefits one might
associate with it. In fact, we would argue that
thus far the only benefits to using AI in art
historical research have been to demonstrate
how poorly equipped it is to conduct research
in the historical humanities.
The purpose of our contribution here is to
offer a digest of those downsides (for an
expansion of this discussion, see our article
“Art History and AI: Ten Axioms”)
and some concrete suggestions for resisting
the incursion of machine learning into art
historical pedagogy:
Environmental: The energy demands of running the LLMs behind programs like ChatGPT are so high that they both contribute massively to harmful emissions and disrupt the power supply in ways that exacerbate economic disparity. Likewise, the water required to cool data centers is already exerting a heavy strain on water retention and provision. Even as DeepSeek’s most recent advances promise to be less resource-intensive, research has shown that, in an instance of what is known as the Jevons Paradox, efficiency gains spur an increase in consumption.
Ethical: There is a particular paradox that makes AI essentially useless as a tool for studying history. The entire point of what we do as historians is to look for untold stories…elements of the history of mankind that are novel and unexpected. There is a fundamental epistemological disjuncture between what PhD-holding historians do and what ChatGPT and its ilk do: the former meticulously, purposefully, and rigorously comb through a mountain of human-curated documents looking for revealing details that diverge from the baseline, offer indications of cultural shift, or elements of humanity embedded in seemingly mundane activities; the latter processes terabytes of machine-harvested data in order to predict what will be the most likely next token in a string, and when these tokens are words they may or may not result in a grammatically coherent sentence.
Institutional: Educational technology (Ed tech) is an industry of its own whose ends are very far removed from those of the educators they purportedly serve. As Audrey Watters has shown in her book, Teaching Machines: The History of Personalized Learning (MIT Press, 2023), the zeal to “optimize” education by means of technology goes back well over a century, and both the promises offered and the language used to make these promises have changed remarkably little. This is a profitable industry that requires ever-new products to sell to educational institutions by convincing administrators and educators alike that teachers can improve learning outcomes and prepare students to meet the demands of the job market, all while “scaling up” by integrating new technologies into the classroom. Remember Massive Open Online Courses or MOOCs? How much time and money were wasted by investing in the technological and physical infrastructure required to perform what ultimately we all did under the duress of a global pandemic, the devastating educational outcomes of which we are still feeling?
Ultimately, though, our objection to incorporating LLMs and generative-AI in the classroom is more fundamental: not only does it short-circuit the pathways of learning, but it also potentially nullifies what we see as our fundamental pedagogical commitment to our students and our scholarly commitment to the past. This may seem overblown rhetoric, but it is important to take a moment to reflect on what we do in the classroom. What, at the level of first principles, are we trying to accomplish through the study of the Middle Ages and early modernity? Why do we continue to believe it is important to educate students about the past? Having an answer to that question is a crucial first step to understanding that the promotion of AI in education is nothing less than an attempt to colonize the university with the impoverished notion of “learning” that resides at the core of “Machine Learning.”
We believe that learning is something akin to the prolonged process of embodied cognition that cuts across accumulated experience, instantaneous calculus, acculturation, and institutionalized education, which combined allow someone to operate in the world. This goes from knowing not to eat raw chicken and “don’t pick up the long scaly things with fangs” to “buckle the seat belt before pulling out of the driveway.” But it also encompasses sentiments like “I relate to Hamlet because I too have wondered what it would be like to commit suicide and stop existing” or “how did we go back to making literature in the wake of the Bubonic Plague? I can imagine it would have been hard to make ‘art’ in 1350.” All those things are the product of a process of “learning.” Some of it is lived, some institutionalized, and some of it is a natural human instinct for survival and empathy. If that is what we mean by “learning,” it is vital that at every turn we insist upon the humanity of the process.
Computers are good at pattern recognition; but pattern recognition and token prediction are not learning. To continue calling them machine “learning” or artificial “intelligence” is to agree with a fallacious metaphor that risks irreparable harm to students, the citizenry, and, by extension, humanity in the form of death-by-a-thousand-cuts.
One crude definition of human cognition might run something like this: one of the highest forms of learning is to have cultivated the ability to look at a situation and imagine it otherwise. This runs the gamut of cognition from the ethical (would it have been right to strangle the infant Pol Pot in his crib?) to the aesthetic (Beethoven but with electric guitars) to the historical (I live in a world with steel support beams, but can I imagine what it would have been like to walk into a Gothic cathedral and not understand how the building supported its own weight while reaching toward heaven?). Machine Learning has now beaten a human player at the game GO. This was long thought to be an impossible feat of “cognition.” Ultimately, though, the game was jailbroken by a machine that could process permutations and recombinations to make the mathematically “optimal” move. This is an amazing accomplishment of computer engineering. But “learning” entered the equation when master GO players began seeing the game otherwise by seeking to find the rationality behind a mathematically “optimal” move. Our job as educators is to make sure that our students are learning, and this means thinking critically about what it has meant to be a human being at different moments in time. What did “love” mean in the fourteenth century? What did a “portrait” look like in the Middle Ages and how is that different from the hundreds of “portraits” you’ve taken with your iPhone? These are questions to which a machine is unresponsive in the most fundamental way because it is made of silicon and shares no kinship with human beings who lived hundreds of years ago. For our part, we will continue trying to induct our students into what Marc Bloch called “the solidarity of the ages,” in all its complexity."
Full article here:
https://static1.squarespace.com/static/53a4b792e4b073bf214c0e66/t/67ddcdb4e1ee531df076cb82/1742589366973/ICMA_MarchNewsletter_v7+FINAL.pdf
#long text post#but this is a really important read#especially for historians#i agree with all of the above and the suggestions made in it#i don't think the public knows what historians truly do anymore and that misunderstanding is really damaging#we are not date-fact finders#we are not archeologists#we are fundamentally storytellers
0 notes
Text
How AI Is Changing Business: Key Insights
Artificial intelligence (AI) is no longer just for tech experts. It’s becoming a useful tool for businesses of all sizes to improve efficiency and solve everyday problems. Here are some key insights from a recent worldwide study we completed on AI in business.
AI Is Being Used More Than Ever
More than half of businesses today see AI as an important part of their plans. This is because AI helps them work faster, make better decisions, and find practical ways to improve how they operate. For instance:
AI-powered customer support systems are reducing response times, with chatbots now handling up to 70% of routine queries.
Supply chain management systems use AI to forecast demand, optimising inventory and minimising waste (a minimal sketch of the forecasting idea follows below).
Companies that adopt AI thoughtfully are seeing real results, from cost savings to higher customer satisfaction.
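To make the demand-forecasting point above concrete, here is a minimal sketch that forecasts next week's demand with a simple moving average and sizes an order against it. The sales figures, safety stock, and function names are invented for illustration; production systems use far richer models that account for seasonality, promotions, and external data.

```python
# Minimal sketch: moving-average demand forecast used to size an order.
# All numbers and parameters are illustrative assumptions.
def forecast_next_week(weekly_sales, window=4):
    """Forecast demand as the average of the most recent weeks."""
    recent = weekly_sales[-window:]
    return sum(recent) / len(recent)

def order_quantity(weekly_sales, on_hand, safety_stock=10):
    """Order enough to cover the forecast plus a safety buffer."""
    need = forecast_next_week(weekly_sales) + safety_stock
    return max(0, round(need - on_hand))

sales = [120, 135, 128, 142, 150, 138, 145, 152]
print(forecast_next_week(sales))          # 146.25 units expected
print(order_quantity(sales, on_hand=60))  # order 96 units
```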
Investments in AI Are Growing Fast
Spending on AI worldwide has jumped from $18 billion in 2014 to $119 billion in 2021. Generative AI tools, like ChatGPT, are a big part of this growth, making up around 30% of the investments in 2023.
Governments are also investing heavily in AI research and infrastructure, with countries like the US and China leading the way.
Startups focusing on niche AI applications—such as legal tech and climate modelling—are attracting record funding.
This surge in investment reflects the growing recognition of AI’s transformative potential across industries.
AI Brings Real Benefits
AI isn’t just a buzzword—it’s helping businesses make more money and work smarter. Here are some industry-specific examples:
Manufacturing: AI-powered predictive maintenance has reduced downtime by up to 30%, saving millions annually in operational costs. Automated quality control systems use computer vision to detect defects with over 95% accuracy.
Healthcare: AI is improving diagnostics, with algorithms achieving up to 90% accuracy in detecting conditions like cancer. It’s also accelerating drug discovery by identifying potential candidates in weeks rather than years.
Retail: AI-driven tools are optimising inventory levels, reducing overstock and stockouts by up to 20%. Personalised marketing campaigns powered by AI have increased customer engagement and revenue for many retailers.
Financial Services: Fraud detection systems powered by AI analyse thousands of transactions per second, identifying anomalies and preventing losses (see the sketch after this list). AI is also enabling faster loan approvals by automating credit assessments.
Energy: AI is being used to predict equipment failures in power plants, reducing outages and improving efficiency. In renewable energy, AI helps optimise wind turbine placement and solar energy forecasting.
Whether it’s automating repetitive tasks or providing insights that humans might miss, AI is making a difference in measurable ways.
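As one illustration of the fraud-detection point above, here is a minimal sketch that scores a new charge against a cardholder's own spending baseline. The amounts and the simple z-score approach are assumptions for illustration, not how any particular bank's system works.

```python
# Minimal sketch: score a transaction by how far it deviates from the
# cardholder's usual spend. Figures and threshold choices are made up.
from statistics import mean, stdev

def fraud_score(amount, past_amounts):
    """Standard deviations above the cardholder's average charge."""
    avg, spread = mean(past_amounts), stdev(past_amounts)
    if spread == 0:
        return 0.0 if amount == avg else float("inf")
    return (amount - avg) / spread

history = [32.50, 18.00, 45.10, 27.30, 39.99, 22.75, 30.00]
print(round(fraud_score(35.00, history), 2))   # ~0.4: looks normal
print(round(fraud_score(950.00, history), 2))  # ~97: review or block
```

In a real pipeline this score would be one feature among hundreds, but it shows why per-customer baselines catch anomalies that global rules miss.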
People Have Mixed Feelings About AI
Many workers believe AI will change their industries significantly, with 75% expecting big shifts. While some worry about job loss, others see AI as a tool that can:
Eliminate tedious, repetitive tasks, allowing employees to focus on creative and strategic work.
Create new roles in areas like AI oversight, ethical governance, and system training.
The key is for businesses to address these concerns by investing in re-skilling programs and ensuring AI is implemented ethically.
What’s Next for AI?
AI is improving quickly, and new tools are being developed all the time. Businesses can expect:
Smarter Analytics: AI systems capable of processing complex datasets will make decision-making faster and more accurate.
Improved Interoperability: AI tools will increasingly integrate seamlessly with existing systems, making adoption easier.
Custom Solutions: Industry-specific AI applications will become more accessible, catering to unique business needs.
AI is becoming a regular part of how businesses work, not just an add-on.
Conclusion: Why AI Matters Now
AI is changing how businesses operate. Whether you run a small company or work at a larger organisation, there are ways AI can help. Start by identifying specific challenges—such as inefficiencies or unmet customer needs—and exploring AI solutions designed to address them.
With the right approach, AI can be a powerful tool to drive growth and innovation. The changes it brings are here to stay. How will your business adapt?
0 notes
Text
lately i've been so addicted to chatgpt.
it started with simple face reading: my own face, then kylian and his colleagues, then luigi.
and then, because luigi was such psychology porn, i tested his digital footprint and his handwritten notes, and then uploaded mine too.
i was having a lot of thoughts in my head like usual. i started to wonder about the gap between my exterior and interior, and whether our linguistic choices and patterns indicate our interior at all. somehow i uploaded my lecturer's emails for psycholinguistic analysis. obviously chatgpt didn't give me any valuable insights, but it was very addictive and i didn't have anyone else to talk to.
during this time, i came back to some old questions: why did i have a lot of neurotic friends? and why do people always assume that i am cold and distant? i knew people with high neuroticism were a lot more needy than our normal peers, and it was convenient for friendship as i didn't have to do the talking, but i didn't set boundaries until suddenly their vicious emotional turmoil was suffocating. i also know i am always reserved except when i pursue sources/materials for stories or have to get things done, but i am pretty much romantic and idealistic, so people calling me cold still sounds kinda funny. then, last night until this morning, i had a long discussion with chatgpt, uploading pictures of my handwritten journals (the pages about how to make sense of life's impermanence and random incidents). that's when it started to identify my pattern:
external emotional volatility
internal emotional waves
my fascination with the beach that conveniently holds two different worlds (land and the sea)
i believe almost everyone can relate to this theory (as you can tell chatgpt is super generic). anyway it's still kinda cool:
so when we listen to other people's stories, we are standing at the shore, looking at their emotional waves. there is some distance that allows us to easily see the dots and connect them to spot patterns and understand the bigger picture.
but when we look inward, the scene is different: we are surfing the emotional waves from our lived experience — the rising, the peak, and the downfall. sometimes, to me, the short period of a happy ending is absurd. it's harder to develop a strong analysis when you're in the middle of it.
the sky above is kinda cool too. at the beach, within a few steps, you have the option to go to a completely different world and get a different life, but when you look up, it's basically the same sky.
***
anyway that's the craziest update about chatgpt, which at this point knows and understands me better than anybody else. it's just this close to surpassing my own self-awareness.
not sure what i feel about that. it's been years since i realized i am not that special. so fuck it? let chatgpt get my data.
also apple tv's sunny is kinda optimistic when stating that robots are created not to become more human but to help us find our humanity. yeah, that's a pretty solid excuse to waste my precious resting hours on this useless chatgpt brainstorm habit.
***
i am so crazy about collecting random, unrelated things and then trying to find/make up a red thread out of them. here are some memorable things from ISEA events in the past three years:
June 2022 “You only get what you measure. And monitoring matters.” - Ellery K.
June 2023 "When people ask, “What can you bring to the table?” Remember: first is social impacts, and second, money is the means to it, not the end of it. Both elements are essential in achieving systemic change." - Christina N. S.
Jan 2024 Look inward to move forward. - my own version to summarize this
Jan 2025 xxx
0 notes