#Amodei
Text

The boss of Anthropic, Dario Amodei.
Of Italian origin, he founded, together with his sister Daniela, the company that developed the family of large language models known as Claude.
He was vice president of research at OpenAI and has worked with Google, which is now investing many millions of dollars in Anthropic. The Amodei siblings are known for their ethical approach to AI.
In short, Dario Amodei, someone who knows what he's talking about, says that in a couple of years we will have an artificial superintelligence. Something that goes far beyond today's weak AI.
Should we be worried?
Will AIs really begin to understand what they're talking about when they give their lightning-fast answers?
What happens now?
5 notes
Text
And now, for the sweets!

I do not know what all my clowns are invented for, but I love them all 🤍. This Amodei lifts Hyzen's mood. Who has a brother like that?
9 notes
Text
Why I’m Feeling the A.G.I.
Here are some things I believe about artificial intelligence: I believe that over the past several years, A.I. systems have started surpassing humans in a number of domains — math, coding and medical diagnosis, just to name a few — and that they’re getting better every day. I believe that very soon — probably in 2026 or 2027, but possibly as soon as this year — one or more A.I. companies will…
#Altman#Amodei#Anthropic AI LLC#Artificial intelligence#Bengio#ChatGPT#Computers and the Internet#Dario#Doomsday#Geoffrey E#Hinton#Innovation#OpenAI Labs#Research#Samuel H#Yoshua
1 note
Text
#tiktok#nancy mace#mark amodei#us politics#us government#us house of representatives#us representative#house of representatives#jen kiggans#james comer#andy harris#jason Smith#republicans are domestic terrorists#republicans are evil#republicans are garbage#republicans are the problem#republicans are weird#republican corruption
14 notes
Text

Dario Amodei, CEO of Anthropic, who holds a PhD in biophysics from Princeton and was a postdoctoral fellow at Stanford before joining Silicon Valley, believes that advances in AI could soon allow humans to double their lifespan.
"If we think about what we might expect humans to achieve in a field like biology within 100 years, I think doubling life expectancy is not crazy," he said during his talk at the 2025 World Economic Forum in Davos, Switzerland.
But why is he so convinced? We explain. 👇
More here:
5 notes
Text
Artificial General Intelligence: Warnings from the Architects
(Images created with the assistance of AI image generation tools)

Table of Contents
- What Is Artificial General Intelligence?
- The Double-Edged Sword of AGI
- Global Stakes
- What AI Leaders Are Saying
- Looking Ahead
- Learn More

There is a striking irony in today’s technology landscape: the very people building the most advanced artificial intelligence are also among the loudest voices…
#agi#AI Leadership#anthropic#dario amodei#demis hassabis#google deepmind#meta ai#mustafa suleyman#OpenAI#sam altman#sundar pachai#yann lecun
0 notes
Text
“One, he believes that AI is so scary that only they should do it,” Huang said of Amodei at a press briefing at Viva Technology in Paris. “Two, [he believes] that AI is so expensive, nobody else should do it … And three, AI is so incredibly powerful that everyone will lose their jobs, which explains why they should be the only company building it.”
Chief Delusional Officer said the quiet part out loud: he wants a monopoly.
Basically, every fucking time someone's talking horseshit about AI being "dangerous", needing to be "controlled" or "paying creators for their data being used", the endgame is corporate monopoly on that tech. They'll be paying the creators bubkes, much like Spotify does, constantly moving the goalposts to weasel out of paying anything altogether despite being able to afford it, while people who actually push that shit forward will be banned from doing so.
I mean, fuck. Microsoft would have jack smack shit, and not the whole "ghiblification" bullshit in particular, if there was no R&D done by Tencent and Alibaba on replicating human faces on the fly without a pre-baked dataset. I had an AI-generated profile pic as the God Emperor of Mankind a full year before ChatGPT's big image-generating update. Hell, Google's Imagen is still sucking shit as if they haven't gotten the memo.
So next time, rub both of your brain cells together before supporting this kind of nonsense. Because what you're gonna get won't be a total ban or any justice - it'll be dull, technologically backwards corporate monopoly with low-quality AI content being pushed everywhere by the desperate marketing departments trying to sell something that doesn't work.
#Anthropic#Chief Delusional Officer#Dario Amodei#AI bullshit#corporate bullshit#Nvidia#Jensen Huang#tech#technology#artificial intelligence
0 notes
Text
Lose your job to AI? Then it may be time to channel William Morris.
JB: Hi Claude, I’m a 64-year-old Creative Director in the global marketing industry. I’ve seen many paradigm shifts in my 40+ year career, and have used each of them to position myself for success. I saw the entire typesetting industry evaporate overnight when the Macintosh and Adobe Illustrator hit the scene. I saw TV unseated as the leading medium by the internet, and with it, on-air designers…
#AI job displacement#AI survival guide#Anthropic#arts and crafts movemet#axios#Dario Amodei#Island Housing Trust#Ludite#marxism#post-jobs society#white-collar bloodbath#william morris
0 notes
Text
Amotea: couture that is simple and Italian, romantic yet contemporary

There is something curious, a touch amusing, and even deeply delightful when the result proves fruitful: the moment when the experiments the lexicon of style attempts, in trying to capture fashion’s evolutions within labels that suggest a precise portrait to mirror itself in, actually succeed. Such is the case with “easy couture”, the definition that accompanies the creations, intentions and suggestions of the young Italian brand Amotea.

“Easy couture”: savoring the taste of saying it, it would almost seem a contradiction in terms. Yet contemplating, with the right attention joined to sensitivity, the details that tell the story of the dresses and of the desire of their creator, Diletta Amodei, from whom they took their creative life and sophisticated soul, voilà, every appearance of contrast is reconciled in the harmony of a brand that is a small world, nimbly balanced between a fascination with timeless classical beauty and a swift refinement made to be worn on the good and beautiful occasions of contemporary life.
On closer inspection, even the brand’s name comes from a reconciliation of different elements, or more exactly is the fruit of a crasis: the beginning of Diletta’s surname joins the feminine name she would have loved to give to a daughter. And, stretching the metaphor, the birth of the brand also happened by virtue of a reconciliation: between Diletta’s childhood passion for fashion and her adult choice to take up again the dream of creating it, realized in 2018 with her own personal project. Amotea, then, is the sublimation of the virtue of synthesis, a kind of conscious exaltation of the motto “little but good” attached to the gentle practice of excellent beauty, and of the authentic feelings that speak from within the soul through the dresses: like a musical harmony whose score is composed of few notes, but most precious ones.

There is the exactness of couture in its exclusive aspects: Italian-made excellence that begins with the materials, as with the tailor-made floral motifs, where the peony, queen of the brand’s essence and symbol of its imagery of elegance at once ethereal and sensual, is designed exclusively in collaboration with the designers of Ratti; and as with the buttons, precious as jewels, crafted with artisanal art by the Milanese company Ascoli.
There is the simplicity of lines that, with a few strokes and careful touches, trace the lines of femininity, linger to enhance the points where it is concentrated, and then loosen to caress it with lightness.

There is contemporary romanticism, which makes Diletta’s deeply personal love for her city, Rome, an inexhaustible source of inspiration from which to draw suggestions of beauty: geographies of style that recall refined, lush gardens, and volumes sculpted with touches of class evoking neoclassical taste, mixed with perfectly contemporary aesthetic flashes. Translated into creations, the collection is inhabited by a refined handful of models: there is the Tea dress, born from the memory of the gardens of Villa Borghese, which bares one shoulder while covering the other with a short puffed sleeve, and reveals the legs in front while flowing long at the back in layers of ruffles with a princely allure; there is Didi, an ensemble of top and trousers flared at the hem, offered in two versions, in thin, sensual lace and in a floral print; there is Julia, the mini-dress that pairs black tulle with red polka dots, or illuminates the figure in its platinum version.

And then there is Clotilde: from a long, fluid version with a slightly retro charm, it shortens and gains dancing fringes and soft glimmers, like starry nights to be lived to the full all summer long. And finally there is Claire: with its balloon skirt, puffed tulle sleeves and a bodice that outlines the bust, it whets the appetite for celebration.
Celebrating the love of beauty, first and always.
Silvia Scorcella
{ published on Webelieveinstyle }
0 notes
Text
Anthropic's stated "AI timelines" seem wildly aggressive to me.
As far as I can tell, they are now saying that by 2028 – and possibly even by 2027, or late 2026 – something they call "powerful AI" will exist.
And by "powerful AI," they mean... this (source, emphasis mine):
In terms of pure intelligence, it is smarter than a Nobel Prize winner across most relevant fields – biology, programming, math, engineering, writing, etc. This means it can prove unsolved mathematical theorems, write extremely good novels, write difficult codebases from scratch, etc. In addition to just being a “smart thing you talk to”, it has all the “interfaces” available to a human working virtually, including text, audio, video, mouse and keyboard control, and internet access. It can engage in any actions, communications, or remote operations enabled by this interface, including taking actions on the internet, taking or giving directions to humans, ordering materials, directing experiments, watching videos, making videos, and so on. It does all of these tasks with, again, a skill exceeding that of the most capable humans in the world. It does not just passively answer questions; instead, it can be given tasks that take hours, days, or weeks to complete, and then goes off and does those tasks autonomously, in the way a smart employee would, asking for clarification as necessary. It does not have a physical embodiment (other than living on a computer screen), but it can control existing physical tools, robots, or laboratory equipment through a computer; in theory it could even design robots or equipment for itself to use. The resources used to train the model can be repurposed to run millions of instances of it (this matches projected cluster sizes by ~2027), and the model can absorb information and generate actions at roughly 10x-100x human speed. It may however be limited by the response time of the physical world or of software it interacts with. Each of these million copies can act independently on unrelated tasks, or if needed can all work together in the same way humans would collaborate, perhaps with different subpopulations fine-tuned to be especially good at particular tasks.
In the post I'm quoting, Amodei is coy about the timeline for this stuff, saying only that
I think it could come as early as 2026, though there are also ways it could take much longer. But for the purposes of this essay, I’d like to put these issues aside [...]
However, other official communications from Anthropic have been more specific. Most notable is their recent OSTP submission, which states (emphasis in original):
Based on current research trajectories, we anticipate that powerful AI systems could emerge as soon as late 2026 or 2027 [...] Powerful AI technology will be built during this Administration. [i.e. the current Trump administration -nost]
See also here, where Jack Clark says (my emphasis):
People underrate how significant and fast-moving AI progress is. We have this notion that in late 2026, or early 2027, powerful AI systems will be built that will have intellectual capabilities that match or exceed Nobel Prize winners. They’ll have the ability to navigate all of the interfaces… [Clark goes on, mentioning some of the other tenets of "powerful AI" as in other Anthropic communications -nost]
----
To be clear, extremely short timelines like these are not unique to Anthropic.
Miles Brundage (ex-OpenAI) says something similar, albeit less specific, in this post. And Daniel Kokotajlo (also ex-OpenAI) has held views like this for a long time now.
Even Sam Altman himself has said similar things (though in much, much vaguer terms, both on the content of the deliverable and the timeline).
Still, Anthropic's statements are unique in being
official positions of the company
extremely specific and ambitious about the details
extremely aggressive about the timing, even by the standards of "short timelines" AI prognosticators in the same social cluster
Re: ambition, note that the definition of "powerful AI" seems almost the opposite of what you'd come up with if you were trying to make a confident forecast of something.
Often people will talk about "AI capable of transforming the world economy" or something more like that, leaving room for the AI in question to do that in one of several ways, or to do so while still failing at some important things.
But instead, Anthropic's definition is a big conjunctive list of "it'll be able to do this and that and this other thing and...", and each individual capability is defined in the most aggressive possible way, too! Not just "good enough at science to be extremely useful for scientists," but "smarter than a Nobel Prize winner," across "most relevant fields" (whatever that means). And not just good at science but also able to "write extremely good novels" (note that we have a long way to go on that front, and I get the feeling that people at AI labs don't appreciate the extent of the gap [cf]). Not only can it use a computer interface, it can use every computer interface; not only can it use them competently, but it can do so better than the best humans in the world. And all of that is in the first two paragraphs – there's four more paragraphs I haven't even touched in this little summary!
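A toy calculation shows why this conjunctive structure matters for forecasting. This is purely illustrative: the independence assumption and all the numbers below are mine, not Anthropic's or the post's; only the "and this and that and this other thing" structure comes from the definition being discussed.

```python
# Toy model of a conjunctive forecast. Hypothetical numbers throughout;
# treating the capabilities as independent is itself a simplification
# (in practice capability arrivals are correlated, which softens the effect).

def conjunction_probability(p_each: float, n_capabilities: int) -> float:
    """Chance that all n capabilities arrive on time, if each one
    independently has probability p_each of doing so."""
    return p_each ** n_capabilities

# Even if each individual capability were 80% likely by the target date,
# a bundle of ten such requirements is only ~11% likely as a package.
print(round(conjunction_probability(0.8, 10), 3))  # 0.107
```

The more aggressively each conjunct is defined, the lower each per-capability probability, and the product falls off fast.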
Re: timing, they have even shorter timelines than Kokotajlo these days, which is remarkable since he's historically been considered "the guy with the really short timelines." (See here where Kokotajlo states a median prediction of 2028 for "AGI," by which he means something less impressive than "powerful AI"; he expects something close to the "powerful AI" vision ["ASI"] ~1 year or so after "AGI" arrives.)
----
I, uh, really do not think this is going to happen in "late 2026 or 2027."
Or even by the end of this presidential administration, for that matter.
I can imagine it happening within my lifetime – which is wild and scary and marvelous. But in 1.5 years?!
The confusing thing is, I am very familiar with the kinds of arguments that "short timelines" people make, and I still find Anthropic's timelines hard to fathom.
Above, I mentioned that Anthropic has shorter timelines than Daniel Kokotajlo, who "merely" expects the same sort of thing in 2029 or so. This probably seems like hairsplitting – from the perspective of your average person not in these circles, both of these predictions look basically identical, "absurdly good godlike sci-fi AI coming absurdly soon." What difference does an extra year or two make, right?
But it's salient to me, because I've been reading Kokotajlo for years now, and I feel like I basically understand his case. And people, including me, tend to push back on him in the "no, that's too soon" direction. I've read many, many blog posts and discussions over the years about this sort of thing; I feel like I should have a handle on what the short-timelines case is.
But even if you accept all the arguments evinced over the years by Daniel "Short Timelines" Kokotajlo, even if you grant all the premises he assumes and some people don't – that still doesn't get you all the way to the Anthropic timeline!
To give a very brief, very inadequate summary, the standard "short timelines argument" right now is like:
Over the next few years we will see a "growth spurt" in the amount of computing power ("compute") used for the largest LLM training runs. This factor of production has been largely stagnant since GPT-4 in 2023, for various reasons, but new clusters are getting built and the metaphorical car will get moving again soon. (See here)
By convention, each "GPT number" uses ~100x as much training compute as the last one. GPT-3 used ~100x as much as GPT-2, and GPT-4 used ~100x as much as GPT-3 (i.e. ~10,000x as much as GPT-2).
We are just now starting to see "~10x GPT-4 compute" models (like Grok 3 and GPT-4.5). In the next few years we will get to "~100x GPT-4 compute" models, and by 2030 we will reach ~10,000x GPT-4 compute.
If you think intuitively about "how much GPT-4 improved upon GPT-3 (100x less) or GPT-2 (10,000x less)," you can maybe convince yourself that these near-future models will be super-smart in ways that are difficult to precisely state/imagine from our vantage point. (GPT-4 was way smarter than GPT-2; it's hard to know what "projecting that forward" would mean, concretely, but it sure does sound like something pretty special)
Meanwhile, all kinds of (arguably) complementary research is going on, like allowing models to "think" for longer amounts of time, giving them GUI interfaces, etc.
All that being said, there's still a big intuitive gap between "ChatGPT, but it's much smarter under the hood" and anything like "powerful AI." But...
...the LLMs are getting good enough that they can write pretty good code, and they're getting better over time. And depending on how you interpret the evidence, you may be able to convince yourself that they're also swiftly getting better at other tasks involved in AI development, like "research engineering." So maybe you don't need to get all the way yourself, you just need to build an AI that's a good enough AI developer that it improves your AIs faster than you can, and then those AIs are even better developers, etc. etc. (People in this social cluster are really keen on the importance of exponential growth, which is generally a good trait to have but IMO it shades into "we need to kick off exponential growth and it'll somehow do the rest because it's all-powerful" in this case.)
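The compute arithmetic in the summary above can be made explicit in a few lines. Only the relative multipliers (the ~100x-per-generation convention, ~10x GPT-4 now, ~10,000x GPT-4 by 2030) come from the argument being summarized; no absolute FLOP figures are assumed.

```python
# The "GPT number" convention described above: each generation uses ~100x
# the training compute of the previous one. Pure arithmetic, no real figures.

GROWTH_PER_GENERATION = 100

def compute_multiple(generations: int) -> int:
    """Training compute relative to a baseline, after `generations` steps."""
    return GROWTH_PER_GENERATION ** generations

print(compute_multiple(1))  # GPT-3 -> GPT-4: 100
print(compute_multiple(2))  # GPT-2 -> GPT-4: 10000

# Where the projections sit relative to GPT-4:
for label, multiple in [("current frontier", 10),
                        ("next few years", compute_multiple(1)),
                        ("by ~2030", compute_multiple(2))]:
    print(f"{label}: ~{multiple:,}x GPT-4 training compute")
```

The intuitive-extrapolation step in the argument is exactly this exponent: two more generations of the same multiplier, with the capability gains assumed to follow.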
And like, I have various disagreements with this picture.
For one thing, the "10x" models we're getting now don't seem especially impressive – there has been a lot of debate over this of course, but reportedly these models were disappointing to their own developers, who expected scaling to work wonders (using the kind of intuitive reasoning mentioned above) and got less than they hoped for.
And (in light of that) I think it's double-counting to talk about the wonders of scaling and then talk about reasoning, computer GUI use, etc. as complementary accelerating factors – those things are just table stakes at this point, the models are already maxing out the tasks you had defined previously, you've gotta give them something new to do or else they'll just sit there wasting GPUs when a smaller model would have sufficed.
And I think we're already at a point where nuances of UX and "character writing" and so forth are more of a limiting factor than intelligence. It's not a lack of "intelligence" that gives us superficially dazzling but vapid "eyeball kick" prose, or voice assistants that are deeply uncomfortable to actually talk to, or (I claim) "AI agents" that get stuck in loops and confuse themselves, or any of that.
We are still stuck in the "Helpful, Harmless, Honest Assistant" chatbot paradigm – no one has seriously broken with it since Anthropic introduced it in a paper in 2021 – and now that paradigm is showing its limits. ("Reasoning" was strapped onto this paradigm in a simple and fairly awkward way, the new "reasoning" models are still chatbots like this, no one is actually doing anything else.) And instead of "okay, let's invent something better," the plan seems to be "let's just scale up these assistant chatbots and try to get them to self-improve, and they'll figure it out." I won't try to explain why in this post (IYI I kind of tried to here) but I really doubt these helpful/harmless guys can bootstrap their way into winning all the Nobel Prizes.
----
All that stuff I just said – that's where I differ from the usual "short timelines" people, from Kokotajlo and co.
But OK, let's say that for the sake of argument, I'm wrong and they're right. It still seems like a pretty tough squeeze to get to "powerful AI" on time, doesn't it?
In the OSTP submission, Anthropic presents their latest release as evidence of their authority to speak on the topic:
In February 2025, we released Claude 3.7 Sonnet, which is by many performance benchmarks the most powerful and capable commercially-available AI system in the world.
I've used Claude 3.7 Sonnet quite a bit. It is indeed really good, by the standards of these sorts of things!
But it is, of course, very very far from "powerful AI." So like, what is the fine-grained timeline even supposed to look like? When do the many, many milestones get crossed? If they're going to have "powerful AI" in early 2027, where exactly are they in mid-2026? At end-of-year 2025?
If I assume that absolutely everything goes splendidly well with no unexpected obstacles – and remember, we are talking about automating all human intellectual labor and all tasks done by humans on computers, but sure, whatever – then maybe we get the really impressive next-gen models later this year or early next year... and maybe they're suddenly good at all the stuff that has been tough for LLMs thus far (the "10x" models already released show little sign of this but sure, whatever)... and then we finally get into the self-improvement loop in earnest, and then... what?
They figure out how to squeeze even more performance out of the GPUs? They think of really smart experiments to run on the cluster? Where are they going to get all the missing information about how to do every single job on earth, the tacit knowledge, the stuff that's not in any web scrape anywhere but locked up in human minds and inaccessible private data stores? Is an experiment designed by a helpful-chatbot AI going to finally crack the problem of giving chatbots the taste to "write extremely good novels," when that taste is precisely what "helpful-chatbot AIs" lack?
I guess the boring answer is that this is all just hype – tech CEO acts like tech CEO, news at 11. (But I don't feel like that can be the full story here, somehow.)
And the scary answer is that there's some secret Anthropic private info that makes this all more plausible. (But I doubt that too – cf. Brundage's claim that there are no more secrets like that now, the short-timelines cards are all on the table.)
It just does not make sense to me. And (as you can probably tell) I find it very frustrating that these guys are out there talking about how human thought will basically be obsolete in a few years, and pontificating about how to find new sources of meaning in life and stuff, without actually laying out an argument that their vision – which would be the common concern of all of us, if it were indeed on the horizon – is actually likely to occur on the timescale they propose.
It would be less frustrating if I were being asked to simply take it on faith, or explicitly on the basis of corporate secret knowledge. But no, the claim is not that, it's something more like "now, now, I know this must sound far-fetched to the layman, but if you really understand 'scaling laws' and 'exponential growth,' and you appreciate the way that pretraining will be scaled up soon, then it's simply obvious that –"
No! Fuck that! I've read the papers you're talking about, I know all the arguments you're handwaving-in-the-direction-of! It still doesn't add up!
280 notes
Text
This is a text of mine from 2017, but I don't think there are any significant updates to make. Except that what used to be a creeping fascism now marches in broad daylight, chest out, one arm outstretched and the other protecting its b*lls.
Then, if you don't know this song by Fausto Amodei, it's time to listen to it.
It's more than 50 years old, but it carries them all too well.
youtube
#fascismo#fascisti#storia#politica#Italia#fratelli d'italia#leghisti#se non li conoscete#amodei#fausto amodei#1972#youtube
11 notes
Text
Multiple Republican lawmakers are voicing concerns about backing a high-profile measure later this week to codify Elon Musk’s DOGE cuts – raising questions about whether it can pass the House at all.
Two Republicans – Reps. Mark Amodei of Nevada and Nicole Malliotakis of New York – separately told CNN they have concerns with the White House’s push to defund the Corporation for Public Broadcasting.
“Still mulling,” Amodei said when asked if he would support the package of cuts. “The impact on local PBS stations appears to be significant.”
Other Republicans have heartburn about how it could cut the Bush-era program, PEPFAR, devoted to fighting HIV and AIDS globally.
“If it cuts PEPFAR like they’re saying it is, that’s not good,” GOP Rep. Don Bacon of Nebraska told CNN last week.
House GOP leaders plan to put the package of cuts, totaling $9.4 billion, on the floor as soon as Thursday, according to two people familiar with the plans.
But Speaker Mike Johnson will need near unanimity in his conference for the package to pass the House, where he can only lose three votes.
Johnson said on Monday that he’s “working on” getting enough votes for the Department of Government Efficiency spending cuts package he hopes to bring to the floor this week.
“The only concern I heard initially was some wanted a little more specificity and detail on what was in the package,” Johnson continued.
Asked how he would persuade members that wanted more specificity in the package, Johnson replied, “I’m gathering up all their questions and we’ll try to get them all answered. I mean, that’s what we do in every piece of legislation.”
If it can survive the House, it will face major obstacles in the Senate. Sen. Susan Collins of Maine told CNN on Monday that she has major misgivings about the global health cuts, including PEPFAR.
65 notes
Text
Here’s what I think is happening. The case for imminent AGI more or less reduces down to the notion that creative problem solving can be commoditized via large model based technologies. Such technologies include language models like the GPT family and Claude, the diffusion models that produce art and others. The thesis is that these models will soon be able to solve difficult problems better than humans ever could. They will be able to do this because of the “bitter lesson” that the “secret to intelligence,” is, in Dario Amodei’s formulation, scaling up simple objective functions by throwing data and compute at them. We will soon live in a world where “geniuses in a datacenter” can conduct fundamental research, solve the aging problem and propel us into a material paradise like that in Iain M. Banks’ Culture novels. Under this theory, we should prioritize building AI over solving other problems because AGI (or whatever you want to call it: Amodei doesn’t like that term) will be a superior and independent means for solving those problems, exceeding the problem solving capacity of mere humans. Thus, for example, both Eric Schmidt and Bill Gates say that we should build lots of power capacity to fuel AI, even if this has short term repercussions for the climate. In Schmidt’s summation, human beings are not going to hit the climate change targets anyway, “because we’re not organized to do it.” Hence, the better bet is to build out the power infrastructure for AI, to build automated systems that are better capable of solving the problems than flawed human social institutions.
There's a weird thing where we've created these text generators, and even if they are fiendishly complex and intelligent, they still aren't what was previously imagined as "AGI": they aren't building a perfectly updating Bayesian model of the world, or something that can be used to change outcomes with ruthless efficiency. And I feel that tech people are still running their "AGI" playbook as if they were.
78 notes
Text

Rep. Mark Amodei (R-Nev.), another senior appropriator, had a similar reaction to the White House leaving open the possibility of withholding funding that the Republican-led Congress clears in the coming months. “That’s a funny way to treat your friends,” he said in an interview.
Do repulsive-can politicians really think the toxic false orange idol has “FRIENDS?”
#republicans#got to have friends#crooked donald#toxic orange asshole#congress#maga morons#money#toxic orange false idol#political games our politicians play#boo hoo#you get what you vote for#kiss trump’s ass
15 notes
Text
#tiktok#melanie stansbury#rep stansbury#nevada#public lands#protect our parks#us politics#us government#mark amodei#u.s. house of representatives#nevada representative#democrats
17 notes
Text
"And if Berlin calls,
tell her to go hang:
dying for the rich,
no! we don't fancy that anymore.
And if NATO calls,
tell her to come back later:
even the stones know it:
nobody believes in that anymore.
If your girl calls,
don't keep her waiting:
military service
I'll do with her alone.
And if the homeland calls,
let it call:
beyond the Alps and the sea
there is another homeland.
And if the homeland asks
you to offer it your life,
answer that your life,
for now, is of use to you."
Fausto Amodei.
Franco Fortini.
18 notes