#a mathematical theory of communication
rwpohl · 1 year ago
Text
minivac 601, claude shannon 1961
0 notes
Text
I need to start a collection of "math things that really, really sound like wizard and/or cleric things."
60 notes · View notes
m---a---x · 1 year ago
Text
Inspired by all the newly created communities, I have also created one about the topic closest to my heart: Foundational Mathematics
It is intended for all types of posts about the topic, from people of all kinds of backgrounds who are interested in it.
Please share with anyone you think might be interested. If you want to be added, comment on this post so I can add you.
26 notes · View notes
leptrois · 7 months ago
Text
-raleous/-rleous/-aleous (suffixes) or parallelous/paralelous (adjectives)
Definition: a modifier expressing that a certain gender is equally distant from another at all points.
Antiparallel equivalent: -tileous (antiparalelous/antiparallelous)
-pencular (suffix) or pendriclear (adjective)
Definition: a modifier expressing that a certain gender is perpendicular (orthogonal/transversal) to another.
Usage examples: my gender is pendriclear; I'm feeling parallelous today; boypencular, girlpencular, neupencular; maveriquerleous, girlaleous, maleraleous.
These are all up to the user's interpretation. What it means to have an orthogonal gender is descriptively figurative. They could be used in the same way as antigender while not technically that (unless the user wants it to be).
It could also be a gender replication (such as minigender without the size-specific part).
Comment: the parallelous suffix for aporagender could be inconsistent, but the more harmonious to the user, the better, so I can see them choosing aporarleous or aporaraleous over aporaleous because of apogender implication. However, I think they are allowed to pick what's best for them if it fits.
8 notes · View notes
trivalentlinks · 2 years ago
Text
Let F be a Floer homology theory. For the purposes of this talk, you can assume it's any Floer homology theory you like, and if you don't like any Floer homology theory, then pretend you do like one and assume it's that one
-- a conference speaker
44 notes · View notes
4gravitons · 4 months ago
Text
Bonus Material for "How Hans Bethe Stumbled Upon Perfect Quantum Theories"
Some bonus material for my Quanta piece last week:
I had an article last week in Quanta Magazine. It’s a piece about something called the Bethe ansatz, a method in mathematical physics that was discovered by Hans Bethe in the 1930s, but which only really started being understood and appreciated around the 1960s. Since then it’s become a key tool, used in theoretical investigations in areas from condensed matter to quantum gravity. In this post,…
3 notes · View notes
kimblestudies · 2 years ago
Text
would anyone be interested in an upper level mathematics discord? I know there are math and hw servers out there, but I was thinking it may be nice to help each other with, let's say, everything past precalc?
14 notes · View notes
tmarshconnors · 8 months ago
Text
Cryptology: The Science of Secrets
Cryptology, the study of codes and ciphers, has captivated me for years. It combines elements of mathematics, linguistics, and computer science, serving as a crucial part of secure communication in our increasingly digital world.
From ancient times, when messages were hidden through simple substitution ciphers, to modern encryption methods used to protect sensitive information, cryptology plays a vital role in safeguarding privacy and security.
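As a toy illustration of the simple substitution ciphers mentioned above, here is a minimal Caesar-shift sketch in Python; the shift value and messages are invented for the example, and nothing like this is safe for real secrets:

```python
import string

def caesar(text: str, shift: int, decrypt: bool = False) -> str:
    """Shift each letter by `shift` places, wrapping around the alphabet."""
    if decrypt:
        shift = -shift
    table = {}
    for alphabet in (string.ascii_lowercase, string.ascii_uppercase):
        for i, ch in enumerate(alphabet):
            table[ch] = alphabet[(i + shift) % 26]
    # Non-letters (spaces, punctuation) pass through unchanged.
    return "".join(table.get(ch, ch) for ch in text)

secret = caesar("attack at dawn", shift=3)
print(secret)                                 # dwwdfn dw gdzq
print(caesar(secret, shift=3, decrypt=True))  # attack at dawn
```

A cipher like this falls to simple frequency analysis in minutes, which is exactly why the field moved on to the modern encryption methods mentioned above.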
As we delve into cryptology, we uncover the challenges of decoding messages, the history of famous codes, and the impact of cryptography on national security. It’s an intriguing field that highlights the importance of communication and the lengths we go to protect our secrets.
1 note · View note
szczekaczz · 2 years ago
Text
i think everything in this world is deeply fascinating and i should be immortal to have enough time to learn and study it all
0 notes
luckyladylily · 6 months ago
Text
So like, transandrophobia.
To start this out, I am a trans woman and have been around the queer community for a while. I'm also bisexual, polyamorous, disabled, and aromantic, and I think these other parts of my identity, and the crap I've caught over the years for them, heavily inform how I analyze something like transandrophobia. My wife is also asexual, so that plays a part in it too.
So every group of marginalized people has their own unique experiences and problems. It's more of a rule than something we've mathematically demonstrated, but as far as these things go it's ridiculously well established, and personally every time I've done even a basic dive into the issues faced by a marginalized group it's been self-evident. I could easily list a dozen groups ranging from racial minorities to different kinds of disabled people to different queer identities and analyze their social issues, but let's be real, this is pretty well established theory; anyone who needs me to do that is not really interacting in good faith. This is one of the big reasons we talk to people about their own experiences and groups: we cannot reasonably extrapolate the experiences of others from our own.
So like trans men and trans mascs and anyone else who falls under that umbrella have their own unique experiences. The idea that we would even question this is weird to me? Like I can't even imagine the kind of evidence someone would need to present to me to change my mind, and given the queer community's pattern of being shitty in exactly this way to people in our community, yeah, that is not happening.
Therefore, we are taking it for granted that the trans men/masc/related umbrella has their own things going on like everyone else ever, and I don't understand how someone acting in good faith can try to claim otherwise unless they are young or otherwise very inexperienced with such things.
The next point of contention seems to be the name, and I gotta be real, I don't care and I don't understand why other people do. I've read all sorts of arguments against the word transandrophobia and the majority of them seem to be rooted in a misunderstanding of intersectionality, and even then, there's a real tendency for people to get so mired in theory that they miss the forest for the trees.
Perhaps more important to me, getting overly worked up about something as unimportant as the precise term is... weird. Like exclusionists hating on bi and ace people weird. I remember what it was like a decade ago when exclusionists were trying to police the words of bi women, and five years ago when ace and aro people were under constant attack under the pretense that our language was harmful for some reason or other. You are going to have to work very, very, very hard to convince me that any bickering over language as it relates to transandrophobia is not just more of the same.
Next, "transandrobros hate trans femmes" and similar stuff. I've seen the callout posts and found them completely unconvincing. Again, they read a lot like the old "ace people hate lesbians!" posts I used to see. I'm not convinced that the individuals involved were a problem, I am certainly not able to extrapolate a problem to the rest of the group.
Finally, there is this idea that "maleness is not a vector for oppression" and this invalidates something about the whole transandrophobia thing, ranging from the entire concept of trans men experiencing prejudice to something about language being imprecise all the way to "This is fascist shit, omg these people are basically nazis" depending on who says it. I'm not going to touch any of that and just look at the underlying logic.
This is based on a misunderstanding of intersectionality theory. Many people think of intersectionality as defining intersecting prejudice, like a Venn diagram, such that transmisogyny is the intersection of transphobia and misogyny. This is incorrect. Intersectionality defines unique prejudice experienced by people with intersecting identities. Instead of transmisogyny as the overlap of transphobia and misogyny, imagine adding a third circle that overlaps both but also has its own areas covered by neither.
Applied to transandrophobia, even if we assume maleness is not a vector for oppression, there is no reason to assume that the intersection of maleness with a marginalized identity doesn't result in new issues. Imagine that 3-circle Venn diagram that represents misogyny, transphobia, and transmisogyny. Even if you remove the misogyny circle, there is still plenty of ground covered by the transmisogyny circle.
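A minimal sketch of that set logic, using purely abstract placeholder labels rather than real claims, just to show the structure of the argument:

```python
# Toy 3-circle picture: A and B are the two "axis" circles, C is the
# intersectional circle that overlaps both but also has area of its own.
# The labels are abstract placeholders, not actual issues.
circle_A = {"a1", "a2", "ab", "ac"}
circle_B = {"b1", "b2", "ab", "bc"}
circle_C = {"ac", "bc", "c1", "c2"}

# Remove circle A entirely, as in the thought experiment above:
print(circle_C - circle_A)             # {'bc', 'c1', 'c2'}: C still covers ground
print(circle_C - circle_A - circle_B)  # {'c1', 'c2'}: including ground that
                                       # neither of the other circles covers at all
```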
This just isn't a valid criticism. It is a pure theory approach based on a flawed reading of theory.
So in summary:
Everyone has their own unique shit going on, and I've seen no convincing evidence that trans men, mascs, etc. are the exception.
I've not seen any convincing argument that the word itself is bad.
I've not seen any convincing evidence that there is some epidemic of transandrophobia truthers hating and harassing trans femmes on scales higher than normal background queer infighting.
The most coherent objection to transandrophobia I've seen is categorically incorrect and based on a fundamental misunderstanding of intersectionality theory.
I would like to remind everyone at this point that I am a trans woman, part of the group this is supposedly a problem for, and I've just not seen it at all, to the point where it is kind of weird how intensely some people are pushing this.
I'm not trying to be mean or whatever, I'm sure the distress on display here comes from a real place and real trauma, but I've yet to see anything that makes me think there is substance to the objections to transandrophobia as a concept. It feels and reads like the latest round of queer intracommunity exclusionism, and the fact that this time around I'm not one of the target identities doesn't change that for me.
2K notes · View notes
nostalgebraist · 3 months ago
Text
Anthropic's stated "AI timelines" seem wildly aggressive to me.
As far as I can tell, they are now saying that by 2028 – and possibly even by 2027, or late 2026 – something they call "powerful AI" will exist.
And by "powerful AI," they mean... this (source, emphasis mine):
In terms of pure intelligence, it is smarter than a Nobel Prize winner across most relevant fields – biology, programming, math, engineering, writing, etc. This means it can prove unsolved mathematical theorems, write extremely good novels, write difficult codebases from scratch, etc. In addition to just being a “smart thing you talk to”, it has all the “interfaces” available to a human working virtually, including text, audio, video, mouse and keyboard control, and internet access. It can engage in any actions, communications, or remote operations enabled by this interface, including taking actions on the internet, taking or giving directions to humans, ordering materials, directing experiments, watching videos, making videos, and so on. It does all of these tasks with, again, a skill exceeding that of the most capable humans in the world. It does not just passively answer questions; instead, it can be given tasks that take hours, days, or weeks to complete, and then goes off and does those tasks autonomously, in the way a smart employee would, asking for clarification as necessary. It does not have a physical embodiment (other than living on a computer screen), but it can control existing physical tools, robots, or laboratory equipment through a computer; in theory it could even design robots or equipment for itself to use. The resources used to train the model can be repurposed to run millions of instances of it (this matches projected cluster sizes by ~2027), and the model can absorb information and generate actions at roughly 10x-100x human speed. It may however be limited by the response time of the physical world or of software it interacts with. Each of these million copies can act independently on unrelated tasks, or if needed can all work together in the same way humans would collaborate, perhaps with different subpopulations fine-tuned to be especially good at particular tasks.
In the post I'm quoting, Amodei is coy about the timeline for this stuff, saying only that
I think it could come as early as 2026, though there are also ways it could take much longer. But for the purposes of this essay, I’d like to put these issues aside [...]
However, other official communications from Anthropic have been more specific. Most notable is their recent OSTP submission, which states (emphasis in original):
Based on current research trajectories, we anticipate that powerful AI systems could emerge as soon as late 2026 or 2027 [...] Powerful AI technology will be built during this Administration. [i.e. the current Trump administration -nost]
See also here, where Jack Clark says (my emphasis):
People underrate how significant and fast-moving AI progress is. We have this notion that in late 2026, or early 2027, powerful AI systems will be built that will have intellectual capabilities that match or exceed Nobel Prize winners. They’ll have the ability to navigate all of the interfaces… [Clark goes on, mentioning some of the other tenets of "powerful AI" as in other Anthropic communications -nost]
----
To be clear, extremely short timelines like these are not unique to Anthropic.
Miles Brundage (ex-OpenAI) says something similar, albeit less specific, in this post. And Daniel Kokotajlo (also ex-OpenAI) has held views like this for a long time now.
Even Sam Altman himself has said similar things (though in much, much vaguer terms, both on the content of the deliverable and the timeline).
Still, Anthropic's statements are unique in being
official positions of the company
extremely specific and ambitious about the details
extremely aggressive about the timing, even by the standards of "short timelines" AI prognosticators in the same social cluster
Re: ambition, note that the definition of "powerful AI" seems almost the opposite of what you'd come up with if you were trying to make a confident forecast of something.
Often people will talk about "AI capable of transforming the world economy" or something more like that, leaving room for the AI in question to do that in one of several ways, or to do so while still failing at some important things.
But instead, Anthropic's definition is a big conjunctive list of "it'll be able to do this and that and this other thing and...", and each individual capability is defined in the most aggressive possible way, too! Not just "good enough at science to be extremely useful for scientists," but "smarter than a Nobel Prize winner," across "most relevant fields" (whatever that means). And not just good at science but also able to "write extremely good novels" (note that we have a long way to go on that front, and I get the feeling that people at AI labs don't appreciate the extent of the gap [cf]). Not only can it use a computer interface, it can use every computer interface; not only can it use them competently, but it can do so better than the best humans in the world. And all of that is in the first two paragraphs – there's four more paragraphs I haven't even touched in this little summary!
Re: timing, they have even shorter timelines than Kokotajlo these days, which is remarkable since he's historically been considered "the guy with the really short timelines." (See here where Kokotajlo states a median prediction of 2028 for "AGI," by which he means something less impressive than "powerful AI"; he expects something close to the "powerful AI" vision ["ASI"] ~1 year or so after "AGI" arrives.)
----
I, uh, really do not think this is going to happen in "late 2026 or 2027."
Or even by the end of this presidential administration, for that matter.
I can imagine it happening within my lifetime – which is wild and scary and marvelous. But in 1.5 years?!
The confusing thing is, I am very familiar with the kinds of arguments that "short timelines" people make, and I still find Anthropic's timelines hard to fathom.
Above, I mentioned that Anthropic has shorter timelines than Daniel Kokotajlo, who "merely" expects the same sort of thing in 2029 or so. This probably seems like hairsplitting – from the perspective of your average person not in these circles, both of these predictions look basically identical, "absurdly good godlike sci-fi AI coming absurdly soon." What difference does an extra year or two make, right?
But it's salient to me, because I've been reading Kokotajlo for years now, and I feel like I basically understand his case. And people, including me, tend to push back on him in the "no, that's too soon" direction. I've read many, many blog posts and discussions over the years about this sort of thing; I feel like I should have a handle on what the short-timelines case is.
But even if you accept all the arguments evinced over the years by Daniel "Short Timelines" Kokotajlo, even if you grant all the premises he assumes and some people don't – that still doesn't get you all the way to the Anthropic timeline!
To give a very brief, very inadequate summary, the standard "short timelines argument" right now is like:
Over the next few years we will see a "growth spurt" in the amount of computing power ("compute") used for the largest LLM training runs. This factor of production has been largely stagnant since GPT-4 in 2023, for various reasons, but new clusters are getting built and the metaphorical car will get moving again soon. (See here)
By convention, each "GPT number" uses ~100x as much training compute as the last one. GPT-3 used ~100x as much as GPT-2, and GPT-4 used ~100x as much as GPT-3 (i.e. ~10,000x as much as GPT-2).
We are just now starting to see "~10x GPT-4 compute" models (like Grok 3 and GPT-4.5). In the next few years we will get to "~100x GPT-4 compute" models, and by 2030 we will reach ~10,000x GPT-4 compute. (A quick arithmetic sketch of this ladder follows after the list.)
If you think intuitively about "how much GPT-4 improved upon GPT-3 (100x less) or GPT-2 (10,000x less)," you can maybe convince yourself that these near-future models will be super-smart in ways that are difficult to precisely state/imagine from our vantage point. (GPT-4 was way smarter than GPT-2; it's hard to know what "projecting that forward" would mean, concretely, but it sure does sound like something pretty special)
Meanwhile, all kinds of (arguably) complementary research is going on, like allowing models to "think" for longer amounts of time, giving them GUI interfaces, etc.
All that being said, there's still a big intuitive gap between "ChatGPT, but it's much smarter under the hood" and anything like "powerful AI." But...
...the LLMs are getting good enough that they can write pretty good code, and they're getting better over time. And depending on how you interpret the evidence, you may be able to convince yourself that they're also swiftly getting better at other tasks involved in AI development, like "research engineering." So maybe you don't need to get all the way yourself, you just need to build an AI that's a good enough AI developer that it improves your AIs faster than you can, and then those AIs are even better developers, etc. etc. (People in this social cluster are really keen on the importance of exponential growth, which is generally a good trait to have but IMO it shades into "we need to kick off exponential growth and it'll somehow do the rest because it's all-powerful" in this case.)
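As a quick back-of-the-envelope rendering of the compute ladder in the bullets above (a sketch that simply takes the "~100x per GPT number" convention at face value and anchors it to a commonly cited rough estimate of GPT-4's training compute, which is an assumption rather than a published figure):

```python
# Rough compute ladder under the "~100x per GPT generation" convention above.
# GPT-4's training compute is often ballparked around 2e25 FLOP; treat that
# anchor as an assumption, not an official number.
GPT4_FLOP = 2e25

ladder = [
    ("GPT-2  (~1/10,000x GPT-4)",       GPT4_FLOP / 10_000),
    ("GPT-3  (~1/100x GPT-4)",          GPT4_FLOP / 100),
    ("GPT-4  (baseline)",               GPT4_FLOP),
    ("~10x GPT-4  (current frontier)",  GPT4_FLOP * 10),
    ("~100x GPT-4  (next few years)",   GPT4_FLOP * 100),
    ("~10,000x GPT-4  (by ~2030)",      GPT4_FLOP * 10_000),
]

for label, flop in ladder:
    print(f"{label:34s} ~{flop:.0e} FLOP")
```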
And like, I have various disagreements with this picture.
For one thing, the "10x" models we're getting now don't seem especially impressive – there has been a lot of debate over this of course, but reportedly these models were disappointing to their own developers, who expected scaling to work wonders (using the kind of intuitive reasoning mentioned above) and got less than they hoped for.
And (in light of that) I think it's double-counting to talk about the wonders of scaling and then talk about reasoning, computer GUI use, etc. as complementary accelerating factors – those things are just table stakes at this point, the models are already maxing out the tasks you had defined previously, you've gotta give them something new to do or else they'll just sit there wasting GPUs when a smaller model would have sufficed.
And I think we're already at a point where nuances of UX and "character writing" and so forth are more of a limiting factor than intelligence. It's not a lack of "intelligence" that gives us superficially dazzling but vapid "eyeball kick" prose, or voice assistants that are deeply uncomfortable to actually talk to, or (I claim) "AI agents" that get stuck in loops and confuse themselves, or any of that.
We are still stuck in the "Helpful, Harmless, Honest Assistant" chatbot paradigm – no one has seriously broken with it since Anthropic introduced it in a paper in 2021 – and now that paradigm is showing its limits. ("Reasoning" was strapped onto this paradigm in a simple and fairly awkward way, the new "reasoning" models are still chatbots like this, no one is actually doing anything else.) And instead of "okay, let's invent something better," the plan seems to be "let's just scale up these assistant chatbots and try to get them to self-improve, and they'll figure it out." I won't try to explain why in this post (IYI I kind of tried to here) but I really doubt these helpful/harmless guys can bootstrap their way into winning all the Nobel Prizes.
----
All that stuff I just said – that's where I differ from the usual "short timelines" people, from Kokotajlo and co.
But OK, let's say that for the sake of argument, I'm wrong and they're right. It still seems like a pretty tough squeeze to get to "powerful AI" on time, doesn't it?
In the OSTP submission, Anthropic presents their latest release as evidence of their authority to speak on the topic:
In February 2025, we released Claude 3.7 Sonnet, which is by many performance benchmarks the most powerful and capable commercially-available AI system in the world.
I've used Claude 3.7 Sonnet quite a bit. It is indeed really good, by the standards of these sorts of things!
But it is, of course, very very far from "powerful AI." So like, what is the fine-grained timeline even supposed to look like? When do the many, many milestones get crossed? If they're going to have "powerful AI" in early 2027, where exactly are they in mid-2026? At end-of-year 2025?
If I assume that absolutely everything goes splendidly well with no unexpected obstacles – and remember, we are talking about automating all human intellectual labor and all tasks done by humans on computers, but sure, whatever – then maybe we get the really impressive next-gen models later this year or early next year... and maybe they're suddenly good at all the stuff that has been tough for LLMs thus far (the "10x" models already released show little sign of this but sure, whatever)... and then we finally get into the self-improvement loop in earnest, and then... what?
They figure out how to squeeze even more performance out of the GPUs? They think of really smart experiments to run on the cluster? Where are they going to get all the missing information about how to do every single job on earth, the tacit knowledge, the stuff that's not in any web scrape anywhere but locked up in human minds and inaccessible private data stores? Is an experiment designed by a helpful-chatbot AI going to finally crack the problem of giving chatbots the taste to "write extremely good novels," when that taste is precisely what "helpful-chatbot AIs" lack?
I guess the boring answer is that this is all just hype – tech CEO acts like tech CEO, news at 11. (But I don't feel like that can be the full story here, somehow.)
And the scary answer is that there's some secret Anthropic private info that makes this all more plausible. (But I doubt that too – cf. Brundage's claim that there are no more secrets like that now, the short-timelines cards are all on the table.)
It just does not make sense to me. And (as you can probably tell) I find it very frustrating that these guys are out there talking about how human thought will basically be obsolete in a few years, and pontificating about how to find new sources of meaning in life and stuff, without actually laying out an argument that their vision – which would be the common concern of all of us, if it were indeed on the horizon – is actually likely to occur on the timescale they propose.
It would be less frustrating if I were being asked to simply take it on faith, or explicitly on the basis of corporate secret knowledge. But no, the claim is not that, it's something more like "now, now, I know this must sound far-fetched to the layman, but if you really understand 'scaling laws' and 'exponential growth,' and you appreciate the way that pretraining will be scaled up soon, then it's simply obvious that –"
No! Fuck that! I've read the papers you're talking about, I know all the arguments you're handwaving-in-the-direction-of! It still doesn't add up!
280 notes · View notes
arayapendragon · 6 months ago
Text
the beauty of quantum immortality 🦋
“what happens after i die?” is a question that has been asked by many throughout the course of history. yet we humans have never been able to find the one true answer to what awaits us once our life in this reality comes to an end. unless...? ;)
this brings forth the concept of quantum immortality, which is a theory stating that our consciousness will continue to experience lifetimes where we are alive after we “die” in this timeline or reality. Hugh Everett was an American physicist who laid the groundwork for this idea in his PhD thesis in the 1950s. he introduced the idea of quantum events leading to the universe branching into several different timelines, where each timeline represents a different outcome; this is known as the Many-Worlds Interpretation (MWI). therefore, if we choose to, we can continue to keep experiencing timelines, or realities, where we survive, thus leading us to believe we are “immortal”.
in the late 1990s, the physicist and cosmologist Max Tegmark delved again into the concept of quantum immortality, suggesting that we actually “die” many times over a lifetime, but that our consciousness continues to experience the timelines where we are alive.
here’s an analogy of quantum immortality to better help you understand: imagine a person playing a game of russian roulette; hence, the gun leads to different quantum outcomes. - basis the MWI, the gun fires (due to an “upward spin” in a subatomic particle) in some timelines/realities, killing the person. - while in other timelines/realities, it doesn’t fire (due to a “downward spin” in a subatomic particle), so the person survives. from the point of view of the person in the experiment, they would only experience the timelines where they survive.
the very fundamentals of quantum immortality and reality shifting intertwine with each other when inspected more closely. both rest on the existence of an infinite number of realities, and seeing as we shift realities with every decision taken, even the smallest ones, it can be deduced that we permashift to either an alternate version of our CR, or any other DR, after we experience death in this reality. meaning, we can experience whatever it is we desire after death; there are no limitations or set rules.
to answer the question at the beginning, there is no definite answer to where we go after death. given that the magic systems for this reality are the law of assumption and the law of attraction, it can be said that we will shift wherever we believe or assume we go after death, thus, in a way, demonstrating quantum immortality.
a few resources you can explore that discuss quantum immortality are:
Our Mathematical Universe by Max Tegmark
Parallel Worlds by Michio Kaku
The Fabric of the Cosmos by Brian Greene
Quantum: A Guide for the Perplexed by Jim Al-Khalili
The r/quantumimmortality community on reddit, though note that the users will have differing opinions of the concept, so it is best to conduct your own research.
322 notes · View notes
mostlysignssomeportents · 9 months ago
Text
There’s no such thing as “shareholder supremacy”
On SEPTEMBER 24th, I'll be speaking IN PERSON at the BOSTON PUBLIC LIBRARY!
Here's a cheap trick: claim that your opponents' goals are so squishy and qualitative that no one will ever be able to say whether they've succeeded or failed, and then declare that your goals can be evaluated using crisp, objective criteria.
This is the whole project of "economism," the idea that politics, with its emphasis on "fairness" and other intangibles, should be replaced with a mathematical form of economics, where every policy question can be reduced to an equation…and then "solved":
https://pluralistic.net/2023/03/28/imagine-a-horse/#perfectly-spherical-cows-of-uniform-density-on-a-frictionless-plane
Before the rise of economism, it was common to speak of its subjects as "political economy" or even "moral philosophy" (Adam Smith, the godfather of capitalism, considered himself a "moral philosopher"). "Political economy" implicitly recognizes that every policy has squishy, subjective, qualitative dimensions that don't readily boil down to math.
For example, if you're asking about whether people should have the "freedom" to enter into contracts, it might be useful to ask yourself how desperate your "free" subject might be, and whether the entity on the other side of that contract is very powerful. Otherwise you'll get "free contracts" like "I'll sell you my kidneys if you promise to evacuate my kid from the path of this wildfire."
The problem is that power is hard to represent faithfully in quantitative models. This may seem like a good reason to you to be skeptical of modeling, but for economism, it's a reason to pretend that the qualitative doesn't exist. The method is to incinerate those qualitative factors to produce a dubious quantitative residue and do math on that:
https://locusmag.com/2021/05/cory-doctorow-qualia/
Hence the famous Ely Devons quote: "If economists wished to study the horse, they wouldn’t go and look at horses. They’d sit in their studies and say to themselves, ‘What would I do if I were a horse?’"
https://pluralistic.net/2022/10/27/economism/#what-would-i-do-if-i-were-a-horse
The neoliberal revolution was a triumph for economism. Neoliberal theorists like Milton Friedman replaced "political economy" with "law and economics," the idea that we should turn every one of our complicated, nuanced, contingent qualitative goals into crisply defined "objective" criteria. Friedman and his merry band of Chicago School economists replaced traditional antitrust (which sought to curtail the corrupting power of large corporations) with a theory called "consumer welfare" that used mathematics to decide which monopolies were "efficient" and therefore good (spoiler: monopolists who paid Friedman's pals to do this mathematical analysis always turned out to be running "efficient" monopolies):
https://pluralistic.net/2022/02/20/we-should-not-endure-a-king/
One of Friedman's signal achievements was the theory of "shareholder supremacy." In 1970, the New York Times published Friedman's editorial "The Social Responsibility of Business Is to Increase Its Profits":
https://www.nytimes.com/1970/09/13/archives/a-friedman-doctrine-the-social-responsibility-of-business-is-to.html
In it, Friedman argued that corporate managers had exactly one job: to increase profits for shareholders. All other considerations – improving the community, making workers' lives better, donating to worthy causes or sponsoring a little league team – were out of bounds. Managers who wanted to improve the world should fund their causes out of their paychecks, not the corporate treasury.
Friedman cloaked his hymn to sociopathic greed in the mantle of objectivism. For capitalism to work, corporations have to solve the "principal-agent" problem, the notoriously thorny dilemma created when one person (the principal) asks another person (the agent) to act on their behalf, given the fact that the agent might find a way to line their own pockets at the principal's expense (for example, a restaurant server might get a bigger tip by offering to discount diners' meals).
Any company that is owned by stockholders and managed by a CEO and other top brass has a huge principal-agent problem, and yet, the limited liability, joint-stock company had produced untold riches, and was considered the ideal organization for "capital formation" by Friedman et al. In true economismist form, Friedman treated all the qualitative questions about the duty of a company as noise and edited them out of the equation, leaving behind a single, elegant formulation: "a manager is doing their job if they are trying to make as much money as possible for their shareholders."
Friedman's formulation was a hit. The business community ran wild with it. Investors mistook an editorial in the New York Times for an SEC rulemaking and sued corporate managers on the theory that they had a "fiduciary duty" to "maximize shareholder value" – and what's more, the courts bought it. Slowly and piecemeal at first, but bit by bit, the idea that rapacious greed was a legal obligation turned into an edifice of legal precedent. Business schools taught it, movies were made about it, and even critics absorbed the message, insisting that we needed to "repeal the law" that said that corporations had to elevate profit over all other consideration (not realizing that no such law existed).
It's easy to see why shareholder supremacy was so attractive for investors and their C-suite Renfields: it created a kind of moral crumple-zone. Whenever people got angry at you for being a greedy asshole, you could shrug and say, "My hands are tied: the law requires me to run the business this way – if you don't believe me, just ask my critics, who insist that we must get rid of this law!"
In a long feature for The American Prospect, Adam M Lowenstein tells the story of how shareholder supremacy eventually came into such wide disrepute that the business lobby felt that it had to do something about it:
https://prospect.org/power/2024-09-17-ponzi-scheme-of-promises/
It starts in 2018, when Jamie Dimon and Warren Buffett decried the short-term, quarterly thinking in corporate management as bad for business's long-term health. When Washington Post columnist Steve Pearlstein wrote a column agreeing with them and arguing that, even more so, businesses should think about equities other than shareholder returns, Jamie Dimon lost his shit and called Pearlstein to tell him it was "the stupidest fucking column I’ve ever read":
https://www.washingtonpost.com/news/wonk/wp/2018/06/07/will-ending-quarterly-earnings-guidance-free-ceos-to-think-long-term/
But the dam had broken. In the months and years that followed, the Business Roundtable would adopt a series of statements that repudiated shareholder supremacy, though of course they didn't admit it. Rather, they insisted that they were clarifying that they'd always thought that sometimes not being a greedy asshole could be good for business, too. Though these statements were nonbinding, and though the CEOs who signed them did so in their personal capacity and not on behalf of their companies, capitalism's most rabid stans treated this as an existential crisis.
Lowenstein identifies this as the forerunner to today's panic over "woke corporations" and "DEI," and – just as with "woke capitalism" – the whole thing amounted to a PR exercise. Lowenstein links to several studies that found that the CEOs who signed onto statements endorsing "stakeholder capitalism" were "more likely to lay off employees during COVID-19, were less inclined to contribute to pandemic relief efforts, had 'higher rates of environmental and labor-related compliance violations,' emitted more carbon into the atmosphere, and spent more money on dividends and buybacks."
One researcher concluded that "signing this statement had zero positive effect":
https://www.theatlantic.com/ideas/archive/2020/08/companies-stand-solidarity-are-licensing-themselves-discriminate/614947
So shareholder supremacy isn't a legal obligation, and statements repudiating shareholder supremacy don't make companies act any better.
But there's an even more fundamental flaw in the argument for the shareholder supremacy rule: it's impossible to know if the rule has been broken.
The shareholder supremacy rule is an unfalsifiable proposition. A CEO can cut wages and lay off workers and claim that it's good for profits because the retained earnings can be paid as a dividend. A CEO can raise wages and hire more people and claim it's good for profits because it will stop important employees from defecting and attract the talent needed to win market share and spin up new products.
A CEO can spend less on marketing and claim it's a cost-savings. A CEO can spend more on marketing and claim it's an investment. A CEO can eliminate products and call it a savings. A CEO can add products and claim they're expansions into new segments. A CEO can settle a lawsuit and claim they're saving money on court fees. A CEO can fight a lawsuit through to the final appeal and claim that they're doing it to scare vexatious litigants away by demonstrating their mettle.
CEOs can use cheaper, inferior materials and claim it's a savings. They can use premium materials and claim it's a competitive advantage that will produce new profits. Everything a company does can be colorably claimed as an attempt to save or make money, from sponsoring the local little league softball team to treating effluent to handing ownership of corporate landholdings to perpetual trusts that designate them as wildlife sanctuaries.
Bribes, campaign contributions, onshoring, offshoring, criminal conspiracies and conference sponsorships – there's a business case for all of these being in line with shareholder supremacy.
Take Boeing: when the company smashed its unions and relocated key production to scab plants in red states, when it forced out whistleblowers and senior engineers who cared about quality, when it outsourced design and production to shops around the world, it realized a savings. Today, between strikes, fines, lawsuits, and a mountain of self-inflicted reputational harm, the company is on the brink of ruin. Was Boeing good to its shareholders? Well, sure – the shareholders who cashed out before all the shit hit the fan made out well. Shareholders with a buy-and-hold posture (like the index funds that can't sell their Boeing holdings so long as the company is in the S&P500) got screwed.
Right wing economists criticize the left for caring too much about "how big a slice of the pie they're getting" rather than focusing on "growing the pie." But that's exactly what Boeing management did – while claiming to be slaves to Friedman's shareholder supremacy. They focused on getting a bigger slice of the pie, screwing their workers, suppliers and customers in the process, and, in so doing, they made the pie so much smaller that it's in danger of disappearing altogether.
Here's the principal-agent problem in action: Boeing management earned bonuses by engaging in corporate autophagia, devouring the company from within. Now, long-term shareholders are paying the price. Far from solving the principal-agent problem with a clean, bright-line rule about how managers should behave, shareholder supremacy is a charter for doing whatever the fuck a CEO feels like doing. It's the squishiest rule imaginable: if someone calls you cruel, you can blame the rule and say you had no choice. If someone calls you feckless, you can blame the rule and say you had no choice. It's an excuse for every season.
The idea that you can reduce complex political questions – like whether workers should get a raise or whether shareholders should get a dividend – to a mathematical rule is a cheap sleight of hand. The trick is an obvious one: the stuff I want to do is empirically justified, while the things you want are based in impossible-to-pin-down appeals to emotion and its handmaiden, ethics. Facts don't care about your feelings, man.
But it's feelings all the way down. Milton Friedman's idol-worshiping cult of shareholder supremacy was never about empiricism and objectivity. It's merely a gimmick to make greed seem scientifically optimal.
The paperback edition of The Lost Cause, my nationally bestselling, hopeful solarpunk novel is out this month!
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/09/18/falsifiability/#figleaves-not-rubrics
365 notes · View notes
typologyastro · 28 days ago
Text
Astrology Observations
Happy Gemini season ♊️🌈!
These are my personal observations and not true facts 👓.
📚 Wherever Saturn is, you are denied the themes of that house. Ex: Saturn in the 4th house natives may feel like they have no family support, or may be homeless literally or metaphorically. Saturn in the 7th house natives may be single for a long time. Saturn in the 8th house may help a person avoid injuries and delay death lmao.
📝 6th house placements may struggle with physical and mental health issues 😷. Ex: Vincent Van Gogh (Moon, Jupiter, and Chiron in the 6th house)
✒️ The danger of Neptune is to make you believe there's no danger in order to lure you in or get you addicted to it. An unknowingly happy death.
📐 The danger of Pluto is to make you suspicious and paranoid of danger everywhere, even when there's no danger around. A traumatic death.
💻 Saturn in the 3rd house or Capricorn on the 3rd house cusp natives may have an age gap with their sibling(s) or be an only child. I often see this placement play out as being the youngest. Ex: Kylie Jenner, BTS's Jungkook
💼 Saturn in the 3rd house natives may have problems with school. They may not have a proper education, be homeschooled, or delay education. Ex: Justin Bieber, Kylie Jenner, BTS's Jungkook
💡 The 3rd house and Mercury are more important than we might think since they govern our thoughts. "I think, therefore I am." -René Descartes. What people think and talk about often reflects their personality. Pay attention to Mercury aspects (with small orbs and especially hard aspects), houses, signs, and your 3rd house placements since they describe a lot about your personality. People may notice and assume your personality based on how you communicate, which is ruled by your Mercury and 3rd house placements, even more than your Sun sign.
🎒 For example:
🏀 Mars/Mercury aspects and Mars 3rd house natives talk like an Aries. These people may spill out whatever they think with no filter, hurt you with their words and give zero fuck. They may have lots of angry thoughts, talk super fast, be sassy and combative, and have the potential to be a rapper. Ex: Blackpink's Jennie (Mars conjunct Mercury)
📒 Venus/Mercury aspects and Venus 3rd house people are sweet talkers, diplomatic, laid-back, and chill (Taurus, Libra). They may have talents in art, music, and writing. Venus 3rd house people can talk like a Gemini and love anything Gemini-related. Ex: Blackpink's Rosé (Venus sextile Mercury)
🎓 Mercury conjunct Saturn people may be some of the most serious thinkers I've known. These people think cautiously, structurally, and systematically (other Saturn/Mercury aspects to a lesser extent). The rigid, factual, black-and-white thinking. They may dislike lying, rarely even joke, and spend most of their lives building a huge logical system or theory in any subject they're interested in. They can excel at mathematics, sciences, politics, philosophy, etc. Ex: Albert Einstein, Thomas Hobbes, Harry Truman
Divided lines by @strangergraphics
Thank you for reading 📏!
81 notes · View notes
4gravitons · 10 months ago
Text
Why Quantum Gravity Is Controversial
Why quantum gravity is hard is one thing. Why it's controversial is something else:
Merging quantum mechanics and gravity is a famously hard physics problem. Explaining why merging quantum mechanics and gravity is hard is, in turn, a very hard science communication problem. The more popular descriptions tend to lead to misunderstandings, and I’ve posted many times over the years to chip away at those misunderstandings. Merging quantum mechanics and gravity is hard…but despite…
2 notes · View notes
blueteller · 9 months ago
Note
Do you know how smart Cale actually is? Like- what extent his intelligence can reach?
That's an interesting question! Let's take a look.
From what I know of IQ scores, anything above 120 puts you in roughly the top 10% of the population. So I can easily see Kim Rok Soo!Cale belonging in that category of >120 IQ. However, IQ has always felt a little vague to me. It's nice to have a number to put on a scale and all, but what does it actually mean in reality? Let's try this from a different angle.
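(Before switching angles, for the curious: that "top 10%" ballpark can be checked with a quick sketch, assuming the conventional IQ normalization of mean 100 and standard deviation 15; other scales exist, so treat the exact cutoff as approximate.)

```python
from math import erf, sqrt

def iq_percentile(iq: float, mean: float = 100.0, sd: float = 15.0) -> float:
    """Percentile of `iq` under a normal(mean, sd) scaling, via the error function."""
    z = (iq - mean) / sd
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

p = iq_percentile(120)
print(f"IQ 120 ≈ {p * 100:.1f}th percentile (top {1 - p:.1%} of the population)")
# ~90.9th percentile, so "top 10%" is a fair round number.
```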
Gardner's Multiple Intelligences model divides talent into eight categories, plus one additional one:
Visual-spatial
Linguistic-verbal
Logical-mathematical
Body-kinesthetic
Musical
Interpersonal
Intrapersonal
Naturalistic
Existential
Why not try to measure him up against each one? After all, no person is actually intelligent in every way, and not even a fictional character can excel in all of them (unless they're a Mary Sue or something lol).
Visual and spatial judgment stands for easy reading, writing, puzzle solving, recognizing patterns, and analyzing charts well. I think Cale is definitely a pro in this category; he does love reading and he's fantastic at analyzing data.
Linguistic-verbal covers remembering written and spoken information, debating, giving persuasive speeches, the ability to explain things, and skill at verbal humor. And while I constantly make fun of Cale for not being able to explain himself, he IS good at using the "glib tongue" and being persuasive, so I think he is very skilled in this category as well.
Logical-mathematical means having excellent problem-solving skills, the ability to come up with abstract ideas and conduct scientific experiments, as well as computing complex issues. Cale is an incredible strategist able to change his plans in an instant, so he is definitely a genius in this field.
Bodily-Kinesthetic Intelligence is a fun one, because I think it's the hardest one to judge, considering that he literally changed bodies. It of course stands for sports, dancing, craftsmanship, physical coordination, and remembering better by practice rather than by learning theory. Cale... does not like that. However, it doesn't mean he's BAD at it. If he was a genius in this field, however, I believe he would like it a bit more. Thus – I suspect he was average. In the past he was forced to exercise for the sake of survival, but once he was given the option of taking it easy, he quit instantly. He is capable, but does not have any particular predisposition for it.
Musical Intelligence drives me nuts, because we literally do not know, and I dearly wish I did. There was not a single mention of it in the whole series. As much as I want to believe in a cool headcanon of KRS being an unrealized musical genius... I think he was probably average or below average in this.
Interpersonal Intelligence stands for communication, conflict-solving, perception and the ability to forge connections with others. And while you might have some doubts about Cale, I say he IS a total pro in this. Those are all leadership skills, and Cale is one HELL of a great leader.
However...
Intrapersonal Intelligence is where Cale is severely lacking. It could be partially due to trauma, but I think at least some of it comes through his natural personality. It stands for introspection, self-reflection, the ability to understand one's motivation and general self-awareness; and that is Cale's biggest weakness, one that might actually cost him his slacker life dream in the end, due to all the misunderstandings he causes.
The last two, Naturalistic and Existential Intelligence types, are also not really Cale's forte. The first is for things like botany, biology, and zoology, paired with enjoyment of camping and hiking – none of which Cale actually does for pleasure, only because he has to. And yeah, farming is in that category too, but it's not like Cale is actually a real farmer just yet. And the second is for stuff like philosophy, considering how current actions influence future outcomes, the ability to see situations from an outside perspective and reflections into the meaning of life and death – and Cale is REALLY not interested in this type of self reflection.
Which leaves Cale with 4 types of intelligence he excels at, 2 which he is REALLY BAD at, 1 where he's below average and 1 he's probably average, with 1 left completely unknown.
Does this make Cale a genius? Pretty much, yes. Does it also make him stupid in very specific ways? VERY MUCH, YES.
214 notes · View notes