One thing I end up thinking about a lot is the relationship between The Origin and Alpha 01.
At this time, both are characters we know very little about. We know what the Origin is to the BlazBlue story, but we really know nothing about what she was as a person, in her lifetime. With Alpha, we have her design and name, but everything else we can only glean through implication and assumption.
Now I’m not entirely sure about this, but between them, the only thing we know for certain is that the Origin was the first PFD to make it to the Amaterasu Unit.
I may be misremembering- correct me with sources if I’m wrong- but I don’t think we know for sure that she was the first PFD overall (within her line or otherwise) OR that she was even the first to be sent into the Boundary. All we know as absolute fact is that she was the first that made it to their goal, the one who took over the Amaterasu Unit, and the one who functionally ‘gave’ or ‘earned’ souls to/for all her ‘sister’ units.
As for Alpha, given her name and her design, she seems to be one of the Boundary Interface Prime Field Devices, like Lambda, Mu, and Nu. We absolutely don’t know that for sure, though. We also don’t know when she existed- was it during the Origin’s era, or early in the BlazBlue era/world/possibility? Or did she appear in both? Further, what on earth is her connection to Ragna- assuming there is one to begin with?
I do believe there is some link. Another creator I’m aware of, the man behind Fairy Tail and Edens Zero, likes to make characters that look the same for Doylist reasons- they’re a reference for himself and the readers, but don’t mean anything in canon/in universe. It’s my understanding that Mori isn’t doing that. Similarities between characters in the overall BlazBlue series have, thus far, almost always been Watsonian- that is to say, there is an in-story reason behind them.
With that on my mind, I am positive that Alpha and Ragna look alike for a purposeful reason. One that I desperately hope we will get enough future BlazBlue content to learn about.
My current wild, off-the-cuff, working theory is that Alpha originates from the pre-BlazBlue world, the one that created the Origin. One idea would be that Alpha was created before the Origin, perhaps within the same line of PFDs, and that she would therefore be a ‘big sister’ in the eyes of the Origin after they both gained their souls/sentience.
Alternatively, regardless of when she was made in relation to the Origin, I’d theorize that Alpha played a key part in the Prime Field War that led the Origin to recreate the world. I imagine Alpha fighting for her sisters during the Prime Field War, and ending up coming across as a ‘protector’ figure to the Origin.
If this were true, Alpha would hold a special place for the Origin, who was trapped watching the war unfold from the Amaterasu Unit. It would be for this reason that, when making her new world, the world of BlazBlue that we know of, the Origin would base her ‘Central Fiction’ off of Alpha. Ragna would be created to fill Alpha’s role as the protector, the older sibling, the hero that the Origin saw her as.
But, for now, this is just some vague theory I have. It’s just how I imagine these plot threads would be resolved. Honestly, it doesn’t feel entirely BlazBlue to me, so I certainly won’t be surprised if I’m entirely off the mark if and when we see these characters explored officially.
But, to summarize my main points:
I am confident that Alpha 01 and the Origin are not the same person.
I’m not convinced that the Origin was the first PFD at all; only that she was the first to acquire a soul and/or the first to make contact with the Amaterasu Unit.
I’m not sure that Ragna, as we know him, existed at all during the Origin’s lifetime. I theorize that she invented him, based on someone else in her life, when she created the BlazBlue world.
If the above is true, I believe Alpha would be that person.
Like I mentioned before, if anything from this theory has already been deconfirmed by canon, please let me know (with a source!). I get my fun in fandoms by treating them kind of like puzzles to solve; I wanna take my little clues and put together the most accurate picture I can.
Even if you think AI search could be good, it won’t be good
TONIGHT (May 15), I'm in NORTH HOLLYWOOD for a screening of STEPHANIE KELTON'S FINDING THE MONEY; FRIDAY (May 17), I'm at the INTERNET ARCHIVE in SAN FRANCISCO to keynote the 10th anniversary of the AUTHORS ALLIANCE.
The big news in search this week is that Google is continuing its transition to "AI search" – instead of typing in search terms and getting links to websites, you'll ask Google a question and an AI will compose an answer based on things it finds on the web:
https://blog.google/products/search/generative-ai-google-search-may-2024/
Google bills this as "let Google do the googling for you." Rather than searching the web yourself, you'll delegate this task to Google. Hidden in this pitch is a tacit admission that Google is no longer a convenient or reliable way to retrieve information, drowning as it is in AI-generated spam, poorly labeled ads, and SEO garbage:
https://pluralistic.net/2024/05/03/keyword-swarming/#site-reputation-abuse
Googling used to be easy: type in a query, get back a screen of highly relevant results. Today, clicking the top links will take you to sites that paid for placement at the top of the screen (rather than the sites that best match your query). Clicking further down will get you scams, AI slop, or bulk-produced SEO nonsense.
AI-powered search promises to fix this, not by making Google search results better, but by having a bot sort through the search results and discard the nonsense that Google will continue to serve up, and summarize the high quality results.
Now, there are plenty of obvious objections to this plan. For starters, why wouldn't Google just make its search results better? Rather than building a LLM for the sole purpose of sorting through the garbage Google is either paid or tricked into serving up, why not just stop serving up garbage? We know that's possible, because other search engines serve really good results by paying for access to Google's back-end and then filtering the results:
https://pluralistic.net/2024/04/04/teach-me-how-to-shruggie/#kagi
Another obvious objection: why would anyone write the web if the only purpose for doing so is to feed a bot that will summarize what you've written without sending anyone to your webpage? Whether you're a commercial publisher hoping to make money from advertising or subscriptions, or – like me – an open access publisher hoping to change people's minds, why would you invite Google to summarize your work without ever showing it to internet users? Nevermind how unfair that is, think about how implausible it is: if this is the way Google will work in the future, why wouldn't every publisher just block Google's crawler?
A third obvious objection: AI is bad. Not morally bad (though maybe morally bad, too!), but technically bad. It "hallucinates" nonsense answers, including dangerous nonsense. It's a supremely confident liar that can get you killed:
https://www.theguardian.com/technology/2023/sep/01/mushroom-pickers-urged-to-avoid-foraging-books-on-amazon-that-appear-to-be-written-by-ai
The promises of AI are grossly oversold, including the promises Google makes, like its claim that its AI had discovered millions of useful new materials. In reality, the number of useful new materials Deepmind had discovered was zero:
https://pluralistic.net/2024/04/23/maximal-plausibility/#reverse-centaurs
This is true of all of AI's most impressive demos. Often, "AI" turns out to be low-waged human workers in a distant call-center pretending to be robots:
https://pluralistic.net/2024/01/31/neural-interface-beta-tester/#tailfins
Sometimes, the AI robot dancing on stage turns out to literally be just a person in a robot suit pretending to be a robot:
https://pluralistic.net/2024/01/29/pay-no-attention/#to-the-little-man-behind-the-curtain
The AI video demos that represent "an existential threat to Hollywood filmmaking" turn out to be so cumbersome as to be practically useless (and vastly inferior to existing production techniques):
https://www.wheresyoured.at/expectations-versus-reality/
But let's take Google at its word. Let's stipulate that:
a) It can't fix search, only add a slop-filtering AI layer on top of it; and
b) The rest of the world will continue to let Google index its pages even if they derive no benefit from doing so; and
c) Google will shortly fix its AI, and all the lies about AI capabilities will be revealed to be premature truths that are finally realized.
AI search is still a bad idea. Because beyond all the obvious reasons that AI search is a terrible idea, there's a subtle – and incurable – defect in this plan: AI search – even excellent AI search – makes it far too easy for Google to cheat us, and Google can't stop cheating us.
Remember: enshittification isn't the result of worse people running tech companies today than in the years when tech services were good and useful. Rather, enshittification is rooted in the collapse of constraints that used to prevent those same people from making their services worse in service to increasing their profit margins:
https://pluralistic.net/2024/03/26/glitchbread/#electronic-shelf-tags
These companies always had the capacity to siphon value away from business customers (like publishers) and end-users (like searchers). That comes with the territory: digital businesses can alter their "business logic" from instant to instant, and for each user, allowing them to change payouts, prices and ranking. I call this "twiddling": turning the knobs on the system's back-end to make sure the house always wins:
https://pluralistic.net/2023/02/19/twiddler/
What changed wasn't the character of the leaders of these businesses, nor their capacity to cheat us. What changed was the consequences for cheating. When the tech companies merged to monopoly, they ceased to fear losing your business to a competitor.
Google's 90% search market share was attained by bribing everyone who operates a service or platform where you might encounter a search box to connect that box to Google. Spending tens of billions of dollars every year to make sure no one ever encounters a non-Google search is a cheaper way to retain your business than making sure Google is the very best search engine:
https://pluralistic.net/2024/02/21/im-feeling-unlucky/#not-up-to-the-task
Competition was once a threat to Google; for years, its mantra was "competition is a click away." Today, competition is all but nonexistent.
Then the surveillance business consolidated into a small number of firms. Two companies dominate the commercial surveillance industry: Google and Meta, and they collude to rig the market:
https://en.wikipedia.org/wiki/Jedi_Blue
That consolidation inevitably leads to regulatory capture: shorn of competitive pressure, the companies that dominate the sector can converge on a single message to policymakers and use their monopoly profits to turn that message into policy:
https://pluralistic.net/2022/06/05/regulatory-capture/
This is why Google doesn't have to worry about privacy laws. They've successfully prevented the passage of a US federal consumer privacy law. The last time the US passed a federal consumer privacy law was in 1988. It's a law that bans video store clerks from telling the newspapers which VHS cassettes you rented:
https://en.wikipedia.org/wiki/Video_Privacy_Protection_Act
In Europe, Google's vast profits lets it fly an Irish flag of convenience, thus taking advantage of Ireland's tolerance for tax evasion and violations of European privacy law:
https://pluralistic.net/2023/05/15/finnegans-snooze/#dirty-old-town
Google doesn't fear competition, it doesn't fear regulation, and it also doesn't fear rival technologies. Google and its fellow Big Tech cartel members have expanded IP law to allow it to prevent third parties from reverse-engineer, hacking, or scraping its services. Google doesn't have to worry about ad-blocking, tracker blocking, or scrapers that filter out Google's lucrative, low-quality results:
https://locusmag.com/2020/09/cory-doctorow-ip/
Google doesn't fear competition, it doesn't fear regulation, it doesn't fear rival technology and it doesn't fear its workers. Google's workforce once enjoyed enormous sway over the company's direction, thanks to their scarcity and market power. But Google has outgrown its dependence on its workers, and lays them off in vast numbers, even as it increases its profits and pisses away tens of billions on stock buybacks:
https://pluralistic.net/2023/11/25/moral-injury/#enshittification
Google is fearless. It doesn't fear losing your business, or being punished by regulators, or being mired in guerrilla warfare with rival engineers. It certainly doesn't fear its workers.
Making search worse is good for Google. Reducing search quality increases the number of queries, and thus ads, that each user must make to find their answers:
https://pluralistic.net/2024/04/24/naming-names/#prabhakar-raghavan
If Google can make things worse for searchers without losing their business, it can make more money for itself. Without the discipline of markets, regulators, tech or workers, it has no impediment to transferring value from searchers and publishers to itself.
Which brings me back to AI search. When Google substitutes its own summaries for links to pages, it creates innumerable opportunities to charge publishers for preferential placement in those summaries.
This is true of any algorithmic feed: while such feeds are important – even vital – for making sense of huge amounts of information, they can also be used to play a high-speed shell-game that makes suckers out of the rest of us:
https://pluralistic.net/2024/05/11/for-you/#the-algorithm-tm
When you trust someone to summarize the truth for you, you become terribly vulnerable to their self-serving lies. In an ideal world, these intermediaries would be "fiduciaries," with a solemn (and legally binding) duty to put your interests ahead of their own:
https://pluralistic.net/2024/05/07/treacherous-computing/#rewilding-the-internet
But Google is clear that its first duty is to its shareholders: not to publishers, not to searchers, not to "partners" or employees.
AI search makes cheating so easy, and Google cheats so much. Indeed, the defects in AI give Google a readymade excuse for any apparent self-dealing: "we didn't tell you a lie because someone paid us to (for example, to recommend a product, or a hotel room, or a political point of view). Sure, they did pay us, but that was just an AI 'hallucination.'"
The existence of well-known AI hallucinations creates a zone of plausible deniability for even more enshittification of Google search. As Madeleine Clare Elish writes, AI serves as a "moral crumple zone":
https://estsjournal.org/index.php/ests/article/view/260
That's why, even if you're willing to believe that Google could make a great AI-based search, we can nevertheless be certain that they won't.
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/05/15/they-trust-me-dumb-fucks/#ai-search
Image:
Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0
https://creativecommons.org/licenses/by/3.0/deed.en
--
djhughman
https://commons.wikimedia.org/wiki/File:Modular_synthesizer_-_%22Control_Voltage%22_electronic_music_shop_in_Portland_OR_-_School_Photos_PCC_%282015-05-23_12.43.01_by_djhughman%29.jpg
CC BY 2.0
https://creativecommons.org/licenses/by/2.0/deed.en
One thing I noticed talking about Linux and free software is that a lot of people seem afraid of learning things about technology. I constantly read things like "I hate windows, but switching to linux would mean learning a new OS, and you have to be some super-smart programmer-hacker to do that." Or even: "Switching to firefox would mean switching browsers and I don't know how"
And that is precisely the attitude tech companies like Microsoft and Apple try to instill in their users in order to control them. They create simple and “friendly” user interfaces for their products, but those interfaces hide information. From pre-installing their OSes to their settings apps, they keep people from learning how their computers work, letting the companies make the decisions for their users.
I think people are underestimating themselves and overestimating how hard it is to learn new things. It is as if Windows/MacOS have taught them some kind of technological learned helplessness. Not knowing how computers work, and being afraid to learn, is how companies like Microsoft control you, and how they justify that control.
For example, people hate the forced, automatic system updates on Windows. Microsoft justifies them as necessary because some people don’t know that their computer needs security updates and therefore never update, so the updates have to be forced on them. That’s true enough, and Microsoft’s tech support people are no doubt well aware of it, but the result is an operating system that presumes the user is incompetent and therefore shouldn’t control their own computer. And of course Microsoft abuses that power to force privacy-invading features on its users. Windows updates are also badly designed by comparison: no Linux distro I’ve used requires the update program to hijack the entire computer and prevent the user from doing anything else, but Windows does.
This is the dark side of “user-friendly” design. By requiring zero knowledge and zero responsibility from the user, it also takes control away from the user. User-friendly graphical user interfaces (GUIs) can also hide the inner workings of a system, in contrast to the command line, which enables more precise control of your computer and gives you more knowledge about what it is doing.
Even GUIs are not all equal in this regard, as the comparison between the Windows Control Panel and the newer Settings app demonstrates. As I have complained before, Windows has been hiding away the powerful but complex Control Panel in favor of the slicker-looking but simplified and less powerful Settings app for over a decade now.
Of course this is a sliding scale, and there is a sensible middle ground between using the command line for everything and user-friendly design that masks taking control away from the end user.
There are Linux distros, like Linux Mint and MX Linux, that have created their own GUI apps for tasks that would otherwise use the command line, without taking control away from the user. This is mainly because they are open-source, non-profit, community-driven distros, instead of proprietary OSes made by profit-driven megacorps.
Still, giving that control to the user presumes some knowledge and responsibility on the user’s part. To return to the update example: by default, both Mint and MX will search for available updates and notify you of them, but you have to make the decision to download and install them. Automatic updates are available in both cases, but they are opt-in; you have to enable that option yourself. And that approach presumes you know that you should update your system to plug security holes, something not everyone does. It gives you control because it presumes you have knowledge and can take responsibility for those decisions.
All this also applies to the underlying fact that practically all pre-built computers nowadays come with an operating system pre-installed. Few people install an OS themselves anymore; instead they use whatever came with the computer. That’s usually Windows or MacOS for desktops/laptops, and Android/iOS for smartphones (which are also a type of computer).
Now, all this is very convenient and user-friendly, since it means you don’t have to learn how to install your own operating system. The OEM takes care of that for you. But again, this is a convenience that takes choice away from you. If you don’t learn how to install your own OS, you are stuck with whatever is on the computer you bought. It’s probably precisely this step that scares people away from Linux: few people have installed even Windows, and installing your own OS seems impossibly scary. But again, learning is the only way to take back control. If you learn how to install an OS off a USB stick, you have a choice of what OS to use. (Sidenote: the hard part, in my opinion, is not the actual install process, but fiddling with the BIOS so the computer will actually boot from the distro on the USB stick. This old comic strip illustrates that very well.)
That’s how life is in general, not just computers. Having control over your life means making decisions based on your own judgment. And to make sensible, rational decisions, you have to learn things, acquire knowledge.
The only alternative is letting others make those decisions for you. You don’t have to learn anything, but you have no control. And in the tech world, that means big corporations like Microsoft, Google, and Apple will make those decisions, and they are motivated by their own profits, not your well-being.
Computers have only become more capable and more important in our lives, and that can enable wonderful things. But it also means more power for the tech companies, more power over our lives. And the only way to resist that is to learn about computers, so that we can make our own decisions about how we use technology.