#how to make an api call
Non-technical people making technical decisions is how you get 6× as many developers hired to write the frontend website code as there are developers in the entire infrastructure team.
#codeblr#progblr#then when velocity is slow they hire another frontend developer#you know how little you value ux designers?#you should value frontend developers that much#thats how many you need to hire#also value ux more my guy#its actually good if blind people can use your website#even though it makes features come out “slower” and you cant see what changed in the website#your team should comprise of mostly backend developers#i say frontend but technically were “fullstack” developers#my hot take is that “fullstack” is just a fancier word for frontend#we write javascript and just enough of the serverside to call the apis/libraries that the real backend developers write#“Fullstack” “engineer” lmao gimme a break#That said “fullstack senior engineer” is definitely on my resume
having just sort of a Night
#could physically Feel myself getting to that point of “hasn't seen humans in long enough that it's Bad”#this usually hits for me around the 72 hour mark moving up or down depending on how long it's been since i've shared a bad#but it's also that tipping point where i'm in a 50/50 split between “oh i need humans” and “actually what if i just didn't make an effort t#see anyone again ever"#was leaning hard towards option two when meg had to cancel which is when the [i'm in danger] feeling Hit#i don't feel. like. BAD. but i'm having an adjustment coming off gabapentin so i Need to do things that give me purpose#and i was halfway through cleaning the apartment when they called#stopped dead intending to finish and simply Didn't#but i fed myself switched my laundry and did some actual flight rising planning#and finally and i'm most proud of this one#i FINALLY quit my part time job#i fully intended to give them two week's notice but kept procrastinating then got hit with massive guilt which of course got worse#my boss was really nice about it and i guess one week is better than nothing#i have a feeling i'm going to feel much better tomorrow and that my executive function is going to improve bc that was REALLY weighing on m#idk why i just couldn't fucking make myself do it#i even fucking brought it up in therapy fully intending to quit that day#and. Didn't.#oh i also emailed my therapist to discuss esa paperwork! AND i read fetch api documentation in prep for maaaaybe testing into the advanced#code the dream class#i guess i did a lot today it just feels like all i did was sit in front of the tv#i'll feel better tomorrow. i will.#thing is. i'm much better at coping with being unexpectedly alone than coping with being unexpectedly with people.#i know how this works. i'll be okay. 
i'll be okay#i'm going to finish my audiobook and go to work and code and text my friends#i will be fine#i just feel a little lonely and weird tonight and i need more vitamin d and also to remember to take my meds#thane.txt
Oh I've got a fun idea for a unique way to do a story: it's a fictional wiki page documenting the discoveries of a group of reverse engineers trying to figure out how to hack demons. They've got an API to make contracts with demons.
They've figured out how to cast spells from magical code stored in the blockchain (because of course it's a blockchain, it's hell).
There's a list of spells people have found, with comments on what they might do, reports on experimentation, and attempts to decode the "source".
Like there's a subsection with a name that won't render properly because you don't have the proper demonic fonts installed, but it's got the reporting name "shinigami eyes". It's a simple divination spell, so called because it makes numbers appear over the heads of people.
They've got a home-patched version to switch it to arabic numerals for the non-hackers who can't read demonic numerals (they're base 6, of course), and they've been slowly brute forcing the different stats they can query.
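The base-6 detail is concrete enough to sketch. Here's what the in-universe "home patch" might boil down to (the function name and digit string are invented for illustration; Python's built-in int() does the actual base conversion):

```python
# Sketch of the home-patched numeral renderer described above: demonic
# numerals are base 6, so the patch just re-renders the digit string
# in familiar decimal. demonic_to_decimal is an invented name.
def demonic_to_decimal(digits: str) -> int:
    # int() converts a digit string from any base between 2 and 36
    return int(digits, 6)

print(demonic_to_decimal("1430"))  # 1*216 + 4*36 + 3*6 + 0 = 378
```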
The first success was a number that represents the number of days it's been since you've visited a library. Apparently that's one of the statistics stored in your soul! And weirdly, it counts down? The spell has to query the per-person LIBRARY_THRESHOLD and then subtract LIBRARY_CURRENT from it to get the displayed count.
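The decoded countdown logic would be a one-liner for the wiki's hackers. A sketch, using the stat names from the entry above (the dict stands in for whatever soul-query mechanism the spell actually uses):

```python
# The displayed "shinigami eyes" number, per the reverse engineers' notes:
# the per-person LIBRARY_THRESHOLD minus the LIBRARY_CURRENT counter.
# The soul dict is a stand-in for the actual demonic query API.
def displayed_library_count(soul: dict) -> int:
    return soul["LIBRARY_THRESHOLD"] - soul["LIBRARY_CURRENT"]

print(displayed_library_count({"LIBRARY_THRESHOLD": 40, "LIBRARY_CURRENT": 12}))  # 28
```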
It could even be a real wiki: keep expanding it by adding additional pages for in-universe discoveries, like... the time they figured out how you can get test animals to cast spells from their own souls (which, being without original sin, have effectively infinite reserves).
P. S. Okay that one got me so I can't end here: they have a list of animals it doesn't work with. The implication being that some animals DO have original sin, and even better yet: these hellhackers only figured that out by accidentally selling a horse's soul to Beelzebub.
2023, The Year of Self-Sabotage
Has anyone noticed the trend businesses have been on in 2023? There's a LOT of self-sabotage going on in the business world. Throughout my life (and everyone else has their own observations too), once in a while you'd see a company make a boneheaded decision about its product or service. And once in a while you'd see a bad decision that at least had some justification (even to an anti-capitalist goober like myself). But this year has been a parade of nonsensical moves of greed and product/service sabotage that make no sense for longevity, or that harm the best interests of the consumer.
Activision-Blizzard: The Overwatch debacle, and Diablo Immortal's scummy practices.
Netflix: The account sharing debacle.
Twitter: Maximum divorced loser Elon Musk destroying its functionality and branding and we still call it Twitter.
Reddit: Inspired by Musk's stupidity, the API tools debacle. Shame on the Reddit communities for not knowing how to strike btw (you don't put a time limit on it).
Hollywood: Pulling shows and films from streaming services to declare them as failed products and somehow get a tax write-off for it.
Also Hollywood: Willing to take quarterly losses greater than the combined annual cost of meeting the demands of the two striking unions.
Unity: Announced in the past day that it will charge developers a fee for installations because greed.
Titan Submersible: "Safety is for losers" says billionaire who proceeds to use his shoddy tech to do a murder-suicide.
Starbucks: Breaking ALL of the labor laws to try and stop unionization. Admittedly a reach to be on this list but the situation (like all the others) is ongoing and can compound.
Embracer: A massive corporate company that bought a bunch of smaller companies. Thought a 2 billion dollar deal with the Saudi government was a sure thing, so they spent 2 billion dollars on stuff. Deal falls through, so they start closing companies they acquired.
That's just the ones I can remember off the top of my head. These aren't business decisions done for the sake of consumers. These are all decisions done to spite consumers or the workers who produce the products and services.
People try to remember years as being the "year of" something. And it's a thing I do too. For me, 2023 is the year of corporate self-sabotage.
DXVK Tips and Troubleshooting: Launching The Sims 3 with DXVK
A big thank you to @heldhram for additional information from his recent DXVK/Reshade tutorial! ◀ How you launch the game may affect whether DXVK is actually working.
During my usage and testing of DXVK, I noticed substantial variation in committed and working memory usage and fps while monitoring my game with Resource Monitor, especially when launching the game with CCMagic or S3MO compared to launching from TS3W.exe/TS3.exe.
It seems DXVK doesn't work properly - or even at all - when the game is launched with CCM/S3MO instead of TS3W.exe/TS3.exe. I don't know if this is also the case with other launchers from EA/Steam/LD and misc launchers, but it might explain why some players using DXVK don't see any improvement from it.
DXVK injects itself into the game exe, so perhaps using launchers bypasses the injection. From extensive testing, I'm inclined to think this is the case.
Someone recently asked me how we know DXVK is really working. A very good question! lol. I thought as long as the cache showed up in the bin folder it was working, but that was no guarantee it was injected every single time at startup. Until I saw Heldhram's excellent guide to using DXVK with Reshade DX9, I relied on my gaming instincts and dodgy eyesight to determine if it was. 🤭
Using the environment variable Heldhram referred to in his guide, a DXVK HUD is added to the upper left-hand corner of your game screen to show it's injected and working, displaying the DXVK version, the graphics card and driver version, and fps.
This led me to look further into it, and I was happy to see that you can add a line to the DXVK config file to show this and other relevant information on the HUD, such as DXVK version, fps, memory usage, gpu driver and more. So if you want to make sure that DXVK is actually injected, add a line to the config file starting with:
dxvk.hud =
After '=', add what you want to see. So 'version' (without quotes) shows the DXVK version. dxvk.hud = version
You could just add the fps by adding 'fps' instead of 'version' if you want.
The DXVK Github page lists all the information you could add to the HUD. It accepts a comma-separated list for multiple options:
devinfo: Displays the name of the GPU and the driver version.
fps: Shows the current frame rate.
frametimes: Shows a frame time graph.
submissions: Shows the number of command buffers submitted per frame.
drawcalls: Shows the number of draw calls and render passes per frame.
pipelines: Shows the total number of graphics and compute pipelines.
descriptors: Shows the number of descriptor pools and descriptor sets.
memory: Shows the amount of device memory allocated and used.
allocations: Shows detailed memory chunk suballocation info.
gpuload: Shows estimated GPU load. May be inaccurate.
version: Shows DXVK version.
api: Shows the D3D feature level used by the application.
cs: Shows worker thread statistics.
compiler: Shows shader compiler activity.
samplers: Shows the current number of sampler pairs used [D3D9 only].
ffshaders: Shows the current number of shaders generated from fixed function state [D3D9 only].
swvp: Shows whether or not the device is running in software vertex processing mode [D3D9 only].
scale=x: Scales the HUD by a factor of x (e.g. 1.5).
opacity=y: Adjusts the HUD opacity by a factor of y (e.g. 0.5, with 1.0 being fully opaque).
Additionally, DXVK_HUD=1 has the same effect as DXVK_HUD=devinfo,fps, and DXVK_HUD=full enables all available HUD elements.
desiree-uk notes: The site documents the latest version of DXVK, so it shows the line typed as 'DXVK_HUD=devinfo,fps' with an underscore and no spaces, but this didn't work for me. If it also doesn't work for you, try it in lowercase, like this: dxvk.hud = version. Make sure there is a space before and after the '='. If adding multiple HUD options, separate them with commas, such as: dxvk.hud = fps,memory,api,version
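Putting the pieces together, a minimal dxvk.conf using the options discussed above might look like this (a sketch; pick whichever HUD options you want from the list, and keep the file in the same folder as the game's exe):

```ini
# dxvk.conf - placed next to TS3W.exe / TS3.exe
# lowercase key, spaces around '=', comma-separated HUD options
dxvk.hud = fps,memory,api,version
```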
The page also shows some other useful information regarding DXVK and its cache file; it's worth a read. (https://github.com/doitsujin/dxvk)
My config file previously showed the DXVK version but I changed it to only show fps. Whatever it shows, it's telling you DXVK is working! DXVK version:
DXVK FPS:
The HUD is quite noticeable, but it's not too obtrusive if you keep the info small. It's only when you enable the full HUD with the line 'dxvk.hud = full' that you'll see it take up practically half the screen! 😄 Whatever is shown, you can still interact with the screen and sims queue.
So while testing this out, I noticed that the HUD wasn't showing up on the screen when launching the game via CCM or S3MO, but it would always show when clicking TS3W.exe. The results were consistent: with DXVK showing that it was running via TS3W.exe, the committed memory was low and steady, the fps didn't drop, and there was no lag or stuttering. I could spend longer in CAS and in game altogether, and longer in my older, larger save games, and the RAM didn't spike as much when saving the game. Launching via CCM/S3MO, the results were sporadic: very high RAM spikes, stuttering, and fps rates jumping up and down. There wasn't much difference from DXVK not being installed at all, in my opinion.
You can test this out yourself, first with whatever launcher you use to start your game and then without it, clicking TS3.exe or TS3W.exe and making sure the game is running as admin. See if the HUD shows up or not, and keep an eye on the memory usage with Resource Monitor running - you'll see the difference. You can delete the line from the config if you really can't stand the sight of it, but you can be sure DXVK is working when you launch the game straight from its exe and you see smooth, steady memory usage as you play. Give it a try, and add in the comments whether it works for you or not and which launcher you use! 😊 Other DXVK information:
Make TS3 Run Smoother with DXVK ◀ - by @criisolate
How to Use DXVK with Sims 3 ◀ - guide from @nornities and @desiree-uk
How to run The Sims 3 with DXVK & Reshade (Direct3D 9.0c) ◀ - by @heldhram
DXVK - Github ◀
it’s kind of ridiculous how much we like gimmick blogs here whereas gimmick accounts seem to be Tolerated at best on Twitter and other sites?
Depends on how you define "gimmick account," I guess. There used to be a lot more automated gimmick accounts on Twitter that were well liked, things like bots that would tweet out random quotes from a TV show every hour or tiny_star_field that would post random arrangements of unicode characters that would look like a starry sky on dark mode, before changes to the Twitter API under Elon killed most of them. (Though we still have things like the Every Breaking Bad/Better Call Saul Frame In Order account here and there.)
As far as other gimmick accounts go, though, I feel like the environment is just very different, especially today. To me at least, your Animals Going Goblin Modes and your ____ With Threatening Auras and whatnot often feel very cynical on Twitter. The exposed follower counts, combined with the blue checks that boost them in the algorithm and allow them to earn ad revenue off their tweets, make them feel like a grift. Just people reposting unsourced content en masse to game the algorithm, gain a ton of followers, and make a quick buck. The absolute worst is when these people straight up sell their popular accounts to scammers or to people advertising vibrators or whatever.
You don't get any of that on Tumblr. Nobody can see your follower count. Nobody's approaching you for product placement deals. You aren't making a fucking cent off of your gimmick blog in 2024. You have to be in it for the love of the game.
It’s April, and the US is experiencing a self-inflicted trade war and a constitutional crisis over immigration. It’s a lot. It’s even enough to make you forget about Elon Musk’s so-called Department of Government Efficiency for a while. You shouldn’t.
To state the obvious: DOGE is still out there, chipping away at the foundations of government infrastructure. Slightly less obvious, maybe, is that the DOGE project has recently entered a new phase. The culling of federal workers and contracts will continue, where there’s anything left to cull. But from here on out, it’s all about the data.
Few if any entities in the world have as much access to as much sensitive data as the United States. From the start, DOGE has wanted as much of it as it could grab, and through a series of resignations, firings, and court cases, has mostly gotten its way.
In many cases it’s still unclear what exactly DOGE engineers have done or intend to do with that data. Despite Elon Musk’s protestations to the contrary, DOGE is as opaque as Vantablack. But recent reporting from WIRED and elsewhere begins to fill in the picture: For DOGE, data is a tool. It’s also a weapon.
Start with the Internal Revenue Service, where DOGE associates put the agency’s best and brightest career engineers in a room with Palantir folks for a few days last week. Their mission, as WIRED previously reported, was to build a “mega API” that would make it easier to view previously compartmentalized data from across the IRS in one place.
In isolation that may not sound so alarming. But in theory, an API for all IRS data would make it possible for any agency—or any outside party with the right permissions, for that matter—to access the most personal, and valuable, data the US government holds about its citizens. The blurriness of DOGE's mission begins to gain focus, especially since we know that the IRS is already sharing its data in unprecedented ways: A deal the agency recently signed with the Department of Homeland Security provides sensitive information about undocumented immigrants.
It’s black-mirror corporate synergy, putting taxpayer data in the service of President Donald Trump’s deportation crusade.
It also extends beyond the IRS. The Washington Post reported this week that DOGE representatives across government agencies—from the Department of Housing and Urban Development to the Social Security Administration—are putting data that is normally cordoned off in service of identifying undocumented immigrants. At the Department of Labor, as WIRED reported Friday, DOGE has gained access to sensitive data about immigrants and farm workers.
And that’s just the data that stays within the government itself. This week NPR reported that a whistleblower at the National Labor Relations Board claims that staffers observed spikes in data leaving the agency after DOGE got access to its systems, with destinations unknown. The whistleblower further claims that DOGE agents appeared to take steps to “cover their tracks,” switching off or evading the monitoring tools that keep tabs on who’s doing what inside computer systems. (An NLRB spokesperson denied to NPR that DOGE had access to the agency’s systems.)
What could that data be used for? Anything. Everything. A company facing a union complaint at the NLRB could, as NPR notes, get access to “damaging testimony, union leadership, legal strategies and internal data on competitors.” There’s no confirmation that it’s been used for those things—but more to the point, there’s also currently no way to know either way.
That’s true also of DOGE’s data aims more broadly. Right now, the target is immigration. But it has hooks into so many systems, access to so much data, interests so varied both within and without government, there are very few limits to how or where it might next be deployed.
The spotlight shines a little less brightly on Elon Musk these days, as more urgent calamities take the stage. But DOGE continues to work in the wings. It has tapped into the most valuable data in the world. The real work starts when it puts that to use.
The enshittification of garage-door openers reveals a vast and deadly rot

I'll be at the Studio City branch of the LA Public Library on Monday, November 13 at 1830hPT to launch my new novel, The Lost Cause. There'll be a reading, a talk, a surprise guest (!!) and a signing, with books on sale. Tell your friends! Come on down!
How could this happen? Owners of Chamberlain MyQ automatic garage door openers just woke up to discover that the company had confiscated valuable features overnight, and that there was nothing they could do about it.
Oh, we know what happened, technically speaking. Chamberlain shut off the API for its garage-door openers, which breaks their integration with home automation systems like Home Assistant. The company even announced that it was doing this, calling the integration an "unauthorized usage" of its products, though the "unauthorized" parties in this case are the people who own Chamberlain products:
https://chamberlaingroup.com/press/a-message-about-our-decision-to-prevent-unauthorized-usage-of-myq
We even know why Chamberlain did this. As Ars Technica's Ron Amadeo points out, shutting off the API is a way for Chamberlain to force its customers to use its ad-beshitted, worst-of-breed app, so that it can make a few pennies by nonconsensually monetizing its customers' eyeballs:
https://arstechnica.com/gadgets/2023/11/chamberlain-blocks-smart-garage-door-opener-from-working-with-smart-homes/
But how did this happen? How did a giant company like Chamberlain come to this enshittening juncture, in which it felt empowered to sabotage the products it had already sold to its customers? How can this be legal? How can it be good for business? How can the people who made this decision even look themselves in the mirror?
To answer these questions, we must first consider the forces that discipline companies, acting against the impulse to enshittify their products and services. There are four constraints on corporate conduct:
I. Competition. The fear of losing your business to a rival can stay even the most sociopathic corporate executive's hand.
II. Regulation. The fear of being fined, criminally sanctioned, or banned from doing business can check the greediest of leaders.
III. Capability. Corporate executives can dream up all kinds of awful ways to shift value from your side of the ledger to their own, but they can only do the things that are technically feasible.
IV. Self-help. The possibility of customers modifying, reconfiguring or altering their products to restore lost functionality or neutralize antifeatures carries an implied threat to vendors. If a printer company's anti-generic-ink measures drive a customer to jailbreak their printers, the original manufacturer's connection to that customer is permanently severed, as the customer creates a durable digital connection to a rival.
When companies act in obnoxious, dishonest, shitty ways, they aren't merely yielding to temptation – they are evading these disciplining forces. Thus, the Great Enshittening we are living through doesn't reflect an increase in the wickedness of corporate leadership. Rather, it represents a moment in which each of these disciplining factors has been gutted by specific policies.
This is good news, actually. We used to put down rat poison and we didn't have a rat problem. Then we stopped putting down rat poison and rats are eating us alive. That's not a nice feeling, but we know at least one way of addressing it – we can start putting down poison again. That is, we can start enforcing the rules that we stopped enforcing, in living memory. Having a terrible problem is no fun, but the best kind of terrible problem to have is one that you know a solution to.
As it happens, Chamberlain is a neat microcosm for all the bad policy choices that created the Era of Enshittification. Let's go through them:
Competition: Chamberlain doesn't have to worry about competition, because it is owned by a private equity fund that "rolled up" all of Chamberlain's major competitors into a single, giant firm. Most garage-door opener brands are actually Chamberlain, including "LiftMaster, Chamberlain, Merlin, and Grifco":
https://www.lakewoodgaragedoor.biz/blog/the-history-of-garage-door-openers
This is a pretty typical PE rollup, and it exploits a bug in US competition law called "Antitrust's Twilight Zone":
https://pluralistic.net/2022/12/16/schumpeterian-terrorism/#deliberately-broken
When companies buy each other, they are subject to "merger scrutiny," a set of guidelines that the FTC and DoJ Antitrust Division use to determine whether the outcome is likely to be bad for competition. These rules have been pretty lax since the Reagan administration, but they're currently being revised to make them substantially more strict:
https://www.justice.gov/opa/pr/justice-department-and-ftc-seek-comment-draft-merger-guidelines
One of the blind spots in these merger guidelines is an exemption for mergers valued at less than $101m. Under the Hart-Scott-Rodino Act, these fly under the radar, evading merger scrutiny. That means that canny PE companies can roll up dozens and dozens of standalone businesses, like funeral homes, hospital beds, magic mushrooms, youth addiction treatment centers, mobile home parks, nursing homes, physicians’ practices, local newspapers, or e-commerce sellers:
http://www.economicliberties.us/wp-content/uploads/2022/12/Serial-Acquisitions-Working-Paper-R4-2.pdf
By titrating the purchase prices, PE companies – like Blackstone, owners of Chamberlain and all the other garage-door makers – can acquire a monopoly without ever raising a regulatory red flag.
But antitrust enforcers aren't helpless. Under (the long dormant) Section 7 of the Clayton Act, competition regulators can block mergers that lead to "incipient monopolization." The incipiency standard prevented monopolies from forming from 1914, when the Clayton Act passed, until the Reagan administration. We used to put down rat poison, and we didn't have rats. We stopped, and rats are gnawing our faces off. We still know where the rat poison is – maybe we should start putting it down again.
On to regulation. How is it possible for Chamberlain to sell you a garage-door opener that has an API and works with your chosen home automation system, and then unilaterally confiscate that valuable feature? Shouldn't regulation protect you from this kind of ripoff?
It should, but it doesn't. Instead, we have a bunch of regulations that protect Chamberlain from you. Think of binding arbitration, which allows Chamberlain to force you to click through an "agreement" that takes away your right to sue them or join a class-action suit:
https://pluralistic.net/2022/10/20/benevolent-dictators/#felony-contempt-of-business-model
But regulation could protect you from Chamberlain. Section 5 of the Federal Trade Commission Act allows the FTC to ban any "unfair and deceptive" conduct. This law has been on the books since 1914, but Section 5 has been dormant, forgotten and unused, for decades. The FTC's new dynamo chair, Lina Khan, has revived it, and is using it like a can-opener to free Americans who've been trapped by abusive conduct:
https://pluralistic.net/2023/01/10/the-courage-to-govern/#whos-in-charge
Khan's used Section 5 powers to challenge privacy invasions, noncompete clauses, and other corporate abuses – the bait-and-switch tactics of Chamberlain are ripe for a Section 5 case. If you buy a gadget because it has five features and then the vendor takes two of them away, they are clearly engaged in "unfair and deceptive" conduct.
On to capability. Since time immemorial, corporate leaders have fetishized "flexibility" in their business arrangements – like the ability to do "dynamic pricing" that changes how much you pay for something based on their guess about how much you are willing to pay. But this impulse to play shell games runs up against the hard limits of physical reality: grocers just can't send an army of rollerskated teenagers around the store to reprice everything as soon as a wealthy or desperate-looking customer comes through the door. They're stuck with crude tactics like doubling the price of a flight that doesn't include a Saturday stay as a way of gouging business travelers on an expense account.
With any shell-game, the quickness of the hand deceives the eye. Corporate crooks armed with computers aren't smarter or more wicked than their analog forebears, but they are faster. Digital tools allow companies to alter the "business logic" of their services from instant to instant, in highly automated ways:
https://pluralistic.net/2023/02/19/twiddler/
The monopoly coalition has successfully argued that this endless "twiddling" should not be constrained by privacy, labor or consumer protection law. Without these constraints, corporate twiddlers can engage in all kinds of ripoffs, like wage theft and algorithmic wage discrimination:
https://pluralistic.net/2023/04/12/algorithmic-wage-discrimination/#fishers-of-men
Twiddling is key to the Darth Vader MBA ("I am altering the deal. Pray I don't alter it further"), in which features are confiscated from moment to moment, without warning or recourse:
https://pluralistic.net/2023/10/26/hit-with-a-brick/#graceful-failure
There's no reason to accept the premise that violating your privacy, labor rights or consumer rights with a computer is so different from analog ripoffs that existing laws don't apply. The unconstrained twiddling of digital ripoff artists is a plague on billions of peoples' lives, and any enforcer who sticks up for our rights will have an army of supporters behind them.
Finally, there's the fear of self-help measures. All the digital flexibility that tech companies use to take value away can be used to take it back, too. The whole modern history of digital computers is the history of "adversarial interoperability," in which the sleazy antifeatures of established companies are banished through reverse-engineering, scraping, bots and other forms of technological guerrilla warfare:
https://www.eff.org/deeplinks/2019/10/adversarial-interoperability
Adversarial interoperability represents a serious threat to established business. If you're a printer company gouging on toner, your customers might defect to a rival that jailbreaks your security measures. That's what happened to Lexmark, who lost a case against the toner-refilling company Static Controls, which went on to buy Lexmark:
https://www.eff.org/deeplinks/2019/06/felony-contempt-business-model-lexmarks-anti-competitive-legacy
Sure, your customers are busy and inattentive and you can degrade the quality of your product a lot before they start looking for ways out. But once they cross that threshold, you can lose them forever. That's what happened to Microsoft: the company made the tactical decision to produce a substandard version of Office for the Mac in a drive to get Mac users to switch to Windows. Instead, Apple made iWork (Pages, Numbers and Keynote), which could read and write every Office file, and Mac users threw away Office, the only Microsoft product they owned, permanently severing their relationship to the company:
https://www.eff.org/deeplinks/2019/06/adversarial-interoperability-reviving-elegant-weapon-more-civilized-age-slay
Today, companies can operate without worrying about this kind of self-help measure. There's a whole slew of IP rights that Chamberlain can enforce against you if you try to fix your garage-door opener yourself, or look to a competitor to sell you a product that restores the feature they took away:
https://locusmag.com/2020/09/cory-doctorow-ip/
Jailbreaking your Chamberlain gadget in order to make it answer to a rival's app involves bypassing a digital lock. Trafficking in a tool to break a digital lock is a felony under Section 1201 of the Digital Millennium Copyright Act, carrying a five-year prison sentence and a $500,000 fine.
In other words, it's not just that tech isn't regulated, allowing for endless twiddling against your privacy, consumer rights and labor rights. It's that tech is badly regulated, to permit unlimited twiddling by tech companies to take away your rights and to prohibit any twiddling by you to take them back. The US government thumbs the scales against you, creating a regime that Jay Freeman aptly dubbed "felony contempt of business model":
https://pluralistic.net/2022/10/23/how-to-fix-cars-by-breaking-felony-contempt-of-business-model/
All kinds of companies have availed themselves of this government-backed superpower. There's DRM – digital locks, covered by DMCA 1201 – in powered wheelchairs:
https://www.eff.org/deeplinks/2022/06/when-drm-comes-your-wheelchair
In dishwashers:
https://pluralistic.net/2021/05/03/cassette-rewinder/#disher-bob
In treadmills:
https://pluralistic.net/2021/06/22/vapescreen/#jane-get-me-off-this-crazy-thing
In tractors:
https://pluralistic.net/2022/05/08/about-those-kill-switched-ukrainian-tractors/
It should come as no surprise to learn that Chamberlain has used DMCA 1201 to block interoperable garage door opener components:
https://scholarship.law.marquette.edu/cgi/viewcontent.cgi?article=1233&context=iplr
That's how we arrived at this juncture, where a company like Chamberlain can break functionality its customers value highly, solely to eke out a minuscule new line of revenue by selling ads on their own app.
Chamberlain bought all its competitors.
Chamberlain operates in a regulatory environment that is extremely tolerant of unfair and deceptive practices. Worse: they can unilaterally take away your right to sue them, which means that if regulators don't bestir themselves to police Chamberlain, you are shit out of luck.
Chamberlain has endless flexibility to unilaterally alter its products' functionality, in fine-grained ways, even after you've purchased them.
Chamberlain can sue you if you try to exercise some of that same flexibility to protect yourself from their bad practices.
Combine all four of those factors, and of course Chamberlain is going to enshittify its products. Every company has had that one weaselly asshole at the product-planning table who suggests a petty grift like breaking every one of the company's customers' property to sell a few ads. But historically, the weasel lost the argument to others, who argued that making every existing customer furious would affect the company's bottom line, costing it sales and/or fines, and prompting customers to permanently sever their relationship with the company by seeking out and installing alternative software. Take away all the constraints on a corporation's worst impulses, and this kind of conduct is inevitable:
https://pluralistic.net/2023/07/28/microincentives-and-enshittification/
This isn't limited to Chamberlain. Without the discipline of competition, regulation, self-help measures or technological limitations, every industry is undergoing wholesale enshittification. It's not a coincidence that Chamberlain's grift involves a push to move users into its app. Because apps can't be reverse-engineered and modified without risking DMCA 1201 prosecution, forcing a user into an app is a tidy and reliable way to take away that user's rights.
Think about ad-blocking. One in four web users has installed an ad-blocker ("the biggest boycott in world history" – Doc Searls). Zero app users have installed app-blockers, because they don't exist, because making one is a felony. An app is just a web-page wrapped in enough IP to make it a crime to defend yourself against corporate predation:
https://pluralistic.net/2023/08/27/an-audacious-plan-to-halt-the-internets-enshittification-and-throw-it-into-reverse/
The temptation to enshittify isn't new, but the ability to do so without consequence is a modern phenomenon, the intersection of weak policy enforcement and powerful technology. Your car is autoenshittified, a rolling rent-seeking platform that spies on you and price-gouges you:
https://pluralistic.net/2023/07/24/rent-to-pwn/#kitt-is-a-demon
Cars are in an uncontrolled skid over Enshittification Cliff. Honda, Toyota, VW and GM all sell cars with infotainment systems that harvest your connected phone's text-messages and send them to the corporation for data-mining. What's more, a judge in Washington state just ruled that this is legal:
https://therecord.media/class-action-lawsuit-cars-text-messages-privacy
While there's no excuse for this kind of sleazy conduct, we can reasonably anticipate that if our courts would punish companies for engaging in it, they might be able to resist the temptation. No wonder Mozilla's latest Privacy Not Included research report called cars "the worst product category we have ever reviewed":
https://foundation.mozilla.org/en/privacynotincluded/articles/its-official-cars-are-the-worst-product-category-we-have-ever-reviewed-for-privacy/
I mean, Nissan tries to infer facts about your sex life and sells those inferences to marketing companies:
https://foundation.mozilla.org/en/privacynotincluded/nissan/
But the OG digital companies are the masters of enshittification. Microsoft has been at this game for longer than anyone, and every day brings a fresh way that Microsoft has worsened its products without fear of consequence. The latest? You can't delete your OneDrive account until you provide an acceptable explanation for your disloyalty:
https://www.theverge.com/2023/11/8/23952878/microsoft-onedrive-windows-close-app-notification
It's tempting to think that the cruelty is the point, but it isn't. It's almost never the point. The point is power and money. Unscrupulous businesses have found ways to make money by making their products worse since the industrial revolution. Here's Jules Dupuit, writing about 19th century French railroads:
It is not because of the few thousand francs which would have to be spent to put a roof over the third-class carriages or to upholster the third-class seats that some company or other has open carriages with wooden benches. What the company is trying to do is to prevent the passengers who can pay the second class fare from traveling third class; it hits the poor, not because it wants to hurt them, but to frighten the rich. And it is again for the same reason that the companies, having proved almost cruel to the third-class passengers and mean to the second-class ones, become lavish in dealing with first-class passengers. Having refused the poor what is necessary, they give the rich what is superfluous.
https://www.tumblr.com/mostlysignssomeportents/731357317521719296/having-refused-the-poor-what-is-necessary-they
But as bad as all this is, let me remind you about the good part: we know how to stop companies from enshittifying their products. We know what disciplines their conduct: competition, regulation, capability and self-help measures. Yes, rats are gnawing our eyeballs, but we know which rat-poison to use, and where to put it to control those rats.
Competition, regulation, constraint and self-help measures all backstop one another, and while one or a few can make a difference, they are most powerful when they're all mobilized in concert. Think of the failure of the EU's landmark privacy law, the GDPR. While the GDPR proved very effective against bottom-feeding smaller ad-tech companies, the worst offenders, Meta and Google, have thumbed their noses at it.
This was enabled in part by the companies' flying an Irish flag of convenience, maintaining the pretense that they have to be regulated in a notorious corporate crime-haven:
https://pluralistic.net/2023/05/15/finnegans-snooze/#dirty-old-town
That let them get away with all kinds of shenanigans, like ignoring the GDPR's requirement that you should be able to easily opt out of data-collection without having to go through cumbersome "cookie consent" dialogs or losing access to the service as punishment for declining to be tracked.
As the noose has tightened around these surveillance giants, they're continuing to play games. Meta now says that the only way to opt out of data-collection in the EU is to pay for the service:
https://pluralistic.net/2023/10/30/markets-remaining-irrational/#steins-law
This is facially illegal under the GDPR. Not only are they prohibited from punishing you for opting out of collection, but the whole scheme ignores the nature of private data collection. If Facebook collects the fact that you and I are friends, but I never opted into data-collection, they have violated the GDPR, even if you were coerced into granting consent:
https://www.nakedcapitalism.com/2023/11/the-pay-or-consent-challenge-for-platform-regulators.html
The GDPR has been around since 2016 and Google and Meta are still invading 500 million Europeans' privacy. This latest delaying tactic could add years to their crime-spree before they are brought to justice.
But most of this surveillance is only possible because so much of how you interact with Google and Meta is via an app, and an app is just a web-page that's a felony to make an ad-blocker for. If the EU were to legalize breaking DRM – repealing Article 6 of the 2001 Copyright Directive – then we wouldn't have to wait for the European Commission to finally wrestle these two giant companies to the ground. Instead, EU companies could make alternative clients for all of Google and Meta's services that don't spy on you, without suffering the fate of OG App, which tried this last winter and was shut down by "felony contempt of business model":
https://pluralistic.net/2023/02/05/battery-vampire/#drained
Enshittification is demoralizing. To quote @wilwheaton, every update to the services we use inspires "dread of 'How will this complicate things as I try to maintain privacy and sanity in a world that demands I have this thing to operate?'"
https://wilwheaton.tumblr.com/post/698603648058556416/cory-doctorow-if-you-see-this-and-have-thoughts
But there are huge natural constituencies for the four disciplining forces that keep enshittification at bay.
Remember, Antitrust's Twilight Zone doesn't just allow rollups of garage-door opener companies – it's also poison for funeral homes, hospital beds, magic mushrooms, youth addiction treatment centers, mobile home parks, nursing homes, physicians’ practices, local newspapers, or e-commerce sellers.
The Binding Arbitration scam that stops Chamberlain customers from suing the company also stops Uber drivers from suing over stolen wages, Turbotax customers from suing over fraud, and many other victims of corporate crime from getting a day in court.
The failure to constrain twiddling to protect privacy, labor rights and consumer rights enables a host of abuses, from stalking, doxing and SWATting to wage theft and price gouging:
https://pluralistic.net/2023/11/06/attention-rents/#consumer-welfare-queens
And Felony Contempt of Business Model is used to screw you over every time you refill your printer, run your dishwasher, or get your iPhone's screen replaced.
The actions needed to halt and reverse this enshittification are well understood, and the partisans for taking those actions are too numerous to count. It's taken a long time for all those individuals suffering under corporate abuses to crystallize into a movement, but at long last, it's happening.
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2023/11/09/lead-me-not-into-temptation/#chamberlain
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
#pluralistic#monopolists#anticircumvention#myq#home assistant#pay or consent#enshittification#surveillance#autoenshittification#privacy#self-help measures#microsoft#onedrive#twiddling#comcom#competitive compatibility#interop#interoperability#adversarial interoperability#felony contempt of business model#darth vader mba
376 notes
Text
using LLMs to control a game character's dialogue seems an obvious use for the technology. and indeed people have tried, for example nVidia made a demo where the player interacts with AI-voiced NPCs:
[embedded YouTube video]
this looks bad, right? like idk about you but I am not raring to play a game with LLM bots instead of human-scripted characters. they don't seem to have anything interesting to say that a normal NPC wouldn't, and the acting is super wooden.
so, the attempts to do this so far that I've seen have some pretty obvious faults:
relying on external API calls to process the data (expensive!)
presumably relying on generic 'you are xyz' prompt engineering to try to get a model to respond 'in character', resulting in bland, flavourless output
limited connection between game state and model state (you would need to translate the relevant game state into a text prompt)
responding to freeform input, models may not be very good at staying 'in character', with the default 'chatbot' persona emerging unexpectedly. or they might just make uncreative choices in general.
AI voice generation, while it's moved very fast in the last couple years, is still very poor at 'acting', producing very flat, emotionless performances, or uncanny mismatches of tone, inflection, etc.
although the model may generate contextually appropriate dialogue, it is difficult to link that back to the behaviour of characters in game
so how could we do better?
the first one could be solved by running LLMs locally on the user's hardware. that has some obvious drawbacks: running on the user's GPU means the LLM is competing with the game's graphics, meaning both must be more limited. ideally you would spread the LLM processing over multiple frames, but you still are limited by available VRAM, which is contested by the game's texture data and so on, and LLMs are very thirsty for VRAM. still, imo this is way more promising than having to talk to the internet and pay for compute time to get your NPC's dialogue lmao
second one might be improved by using a tool like control vectors to more granularly and consistently shape the tone of the output. I heard about this technique today (thanks @cherrvak)
third one is an interesting challenge - but perhaps a control-vector approach could also be relevant here? if you could figure out how a description of some relevant piece of game state affects the processing of the model, you could then apply that as a control vector when generating output. so the bridge between the game state and the LLM would be a set of weights for control vectors that are applied during generation.
this one is probably something where finetuning the model, and using control vectors to maintain a consistent 'pressure' to act a certain way even as the context window gets longer, could help a lot.
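for the unfamiliar: a control vector in this sense is just a direction in the model's hidden-state space that gets added, scaled, to the activations during generation. here's a minimal toy sketch of that mechanic, where the random vectors stand in for directions you'd actually extract from a real model (real use means hooking a transformer layer, not operating on a bare numpy array):

```python
import numpy as np

def apply_control_vectors(hidden_state, vectors, weights):
    """Steer a hidden state by adding weighted control vectors.

    hidden_state: activation at some layer, shape (d,)
    vectors: dict of named direction vectors, each shape (d,)
    weights: dict mapping names to scalar strengths, which is where
             game state could plug in ("npc is angry" -> 0.8)
    """
    steered = hidden_state.copy()
    for name, weight in weights.items():
        direction = vectors[name]
        # normalise each direction so the weight directly controls magnitude
        steered = steered + weight * direction / np.linalg.norm(direction)
    return steered

# toy stand-in: a 4-dim "residual stream" and two behaviour directions
rng = np.random.default_rng(0)
vectors = {"angry": rng.normal(size=4), "formal": rng.normal(size=4)}
state = np.zeros(4)
steered = apply_control_vectors(state, vectors, {"angry": 0.8, "formal": -0.3})
```

the appeal over prompt engineering is that the weights are continuous knobs, so the game could ramp a character's "angry" pressure up and down frame to frame instead of rewriting a text prompt.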
probably the vocal performance problem will improve in the next generation of voice generators, I'm certainly not solving it. a purely text-based game would avoid the problem entirely of course.
this one is tricky. perhaps the model could be taught to generate a description of a plan or intention, but linking that back to commands to perform by traditional agentic game 'AI' is not trivial. ideally, if there are various high-level commands that a game character might want to perform (like 'navigate to a specific location' or 'target an enemy') that are usually selected using some other kind of algorithm like weighted utilities, you could train the model to generate tokens that correspond to those actions and then feed them back in to the 'bot' side? I'm sure people have tried this kind of thing in robotics. you could just have the LLM stuff go 'one way', and rely on traditional game AI for everything besides dialogue, but it would be interesting to complete that feedback loop.
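the crudest version of that feedback loop needs no finetuning at all: constrain the model to a small vocabulary of action tokens and parse whatever it emits back into the game's command set. everything here (the token names, the actions) is hypothetical, just to show the shape of the bridge:

```python
from enum import Enum, auto

class Action(Enum):
    NAVIGATE = auto()
    TARGET = auto()
    FLEE = auto()
    IDLE = auto()

# hypothetical control tokens the model would be taught to emit
TOKEN_TO_ACTION = {
    "<navigate>": Action.NAVIGATE,
    "<target>": Action.TARGET,
    "<flee>": Action.FLEE,
}

def parse_plan(generated_text):
    """Map any action tokens in the LLM's output to game commands;
    fall back to IDLE so a rambling generation can't crash the bot."""
    actions = [act for tok, act in TOKEN_TO_ACTION.items()
               if tok in generated_text]
    return actions or [Action.IDLE]

plan = parse_plan("I should warn the guards. <navigate> <target>")
```

the traditional game AI then executes the parsed actions with its usual pathfinding and targeting code, so the LLM only ever supplies intent, never raw behaviour.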
I doubt I'll be using this anytime soon (models are just too demanding to run on anything but a high-end PC, which is too niche, and I'll need to spend time playing with these models to determine if these ideas are even feasible), but maybe something to come back to in the future. first step is to figure out how to drive the control-vector thing locally.
48 notes
Text
Ok, hear me out.
!!calling all BoBoiBoy fans with this one!!
We know monsta like to end BoBoiBoy with a bang (this is in a literal sense like big anime fight scene)
Now we know that by this point it will probably end with all the elementals reaching their 3rd forms.
We already got rimba and beliung(kinda). And we almost got nova and blizzard. Now I would argue that halilintar already got an arc for his element.
So the only one left is solar and gempa.
So here's my idea, what if the final arc is about those two having troubles because of their elementals' past with Retak'ka and Tok Kasa.
Like Solar would be scared/worried that he might turn out like Retak'ka. I mean BoBoiBoy losing control when he got a new power isn't uncommon.
(Looking at taufan, api and cahaya when they first came out. The more destructive one. I mean Hali got a villain moment!)
(It's funny that all that happened and then there's daun and duri. He's just silly ^_^ . And air and ais who are just chill. Pun intended.)
But plot twist, it's gempa who went out of control when he got crystal.
Because I think he should at least have 1 crazy moment. Even when he was first introduced, he was the only one in the ori trio to not go wrong.
And don't you dare say that guy doesn't have the capacity to be crazy and out of control. Have you seen gentar!? Half of that is at least him.
So what happens is, he turns into crystal and goes crazy. This makes solar turn into gamma to stop him from hurting people. You can get the other elements to join in too. Then he calms down and they all team up to fight whatever villain they're fighting.
And that ladies and gentlemen is my take on how the 3rd forms and evolution should happen.
Thank you for listening and mic drop.
(This whole thing happened cause I got bored and reread the whole comic looking for any clues for the future plot. And I found one scene in the comic that they kinda rewrote/cut out in the show.
Which was a scene of Tok Kasa training BoBoiBoy with beliung, if I'm correct. And BoBoiBoy asks him why he doesn't just teach him the crystal element. And Tok Kasa loudly said "No!" cause he doesn't remember how to unlock the power anymore. Which I would call bullcrap. This man lied about knowing Retak'ka, and for the record he lied a lot about other important information in the past.
So why the secrets? So far from what we can tell gempa/tanah is a pretty stable element (pun intended). I know it's probably just the writers using it as an excuse so we could have plots and arcs but still. They better make up a reason for it.
I also want the elementals to have more plots with each other and interact with each other (other than designated duos like blaze & ice, duri with solar, taufan & Hali). The last one barely gets any interaction nowadays. Also I really like gempa and he's my favourite.)
#random rambles#boboiboy#boboiboy galaxy#boboiboy solar#boboiboy halilintar#boboiboy gempa#boboiboy taufan#boboiboy ais#boboiboy blaze#boboiboy duri#boboiboy gentar#ramblings#boboiboy elementals
21 notes
Text
Here’s the third exciting installment in my series about backing up one Tumblr post that absolutely no one asked for. The previous updates are linked here.
Previously on Tumblr API Hell
Some blogs returned 404 errors. After investigating with Allie's help, it turns out it’s not a sideblog issue — it’s a privacy setting. It pleases me that Tumblr's rickety API respects the word no.
Also, shoutout to the one line of code in my loop that always broke when someone reblogged without tags. Fixed it.
What I got working:
Tags added during reblogs of the post
Any added commentary (what the blog actually wrote)
Full post metadata so I can extract other information later (ie. outside the loop)
New questions I’m trying to answer:
While flailing around in the JSON trying to figure out which blog added which text (because obviously Tumblr’s rickety API doesn’t just tell you), I found that all the good stuff lives in a deeply nested structure called trail. It splits content into HTML chunks — but there’s no guarantee about order, and you have to reconstruct it yourself.
Here’s a stylized diagram of what trail looks like in the JSON list (which gets parsed as a data frame in R):
[image: stylized diagram of the trail structure]
I started wondering:
Can I use the trail to reconstruct a version tree to see which path through the reblog chain was the most influential for the post?
This would let me ask:
Which version of the post are people reblogging?
Does added commentary increase the chance it gets reblogged again?
Are some blogs “amplifiers” — their version spreads more than others?
It’s worth thinking about these questions now — so I can make sure I’m collecting the right information from Tumblr’s rickety API before I run my R code on a 272K-note post.
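To make the version-tree idea concrete, here's a toy sketch (in Python rather than the R the real pipeline uses): treat each reblog's trail, the ordered list of blogs whose content it carries, as a path, and count how often each version shows up. The field names match the trail objects in Tumblr's v2 API payloads, but the data below is entirely made up:

```python
from collections import Counter

def version_counts(posts):
    """Count how many reblogs carry each version of the post.

    posts: list of post dicts, each with a "trail" list whose items
    look like {"blog": {"name": ...}} (as in the v2 API).
    A "version" is identified by the tuple of blog names in its trail.
    """
    counts = Counter()
    for post in posts:
        path = tuple(item["blog"]["name"] for item in post.get("trail", []))
        counts[path] += 1
    return counts

# made-up data: three reblogs, two of which carry alice -> bob's version
posts = [
    {"trail": [{"blog": {"name": "alice"}}]},
    {"trail": [{"blog": {"name": "alice"}}, {"blog": {"name": "bob"}}]},
    {"trail": [{"blog": {"name": "alice"}}, {"blog": {"name": "bob"}}]},
]
counts = version_counts(posts)
```

The most common paths are the "influential" versions; comparing counts for a path against its prefix (alice vs. alice + bob) is one way to ask whether bob's added commentary helped the post spread.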
Summary
Still backing up one post. Just me, 600+ lines of R code, and Tumblr’s API fighting it out in a Waffle House parking lot. The code’s nearly ready — I’m almost finished testing it on an 800-note post before trying it on my 272K-note Blaze post. Stay tuned…
Zero fucks given?
If you give zero fucks about my rickety API series, you can block my data science tag, #a rare data science post, or #tumblr's rickety API. But if we're mutuals then you know how it works here - you get what you get. It's up to you to curate your online experience. XD
#a rare data science post#tumblr's rickety API#fuck you API user#i'll probably make my R code available in github#there's a lot of profanity in the comments#just saying
24 notes
Note
Hi!
Are there any plans to improve how users can prune a reblog chain before reblogging it themselves?
Right now, we either have to remove all the additional posts, or find a reblog from before a particular part was added. That's tricky, and being able to just snip the rest of the chain off at more points would make life a lot easier.
Answer: Hey, @tartrazeen!
Good news! This is already supported in the API via the “exclude_trail_items” parameter when creating or editing a reblog. However, it only really works for posts stored in our Neue Post Format (NPF), which excludes all posts made before 2016 and many posts on web before August 2023. For that reason, we don’t expect this will ever be made a “real” feature that you can access in our official clients. It wouldn’t be too wise of us to design, build, and ship an interface for something we know won’t even work on the majority of posts.
If you don’t fancy doing this via the API yourself, there are other solutions to leverage this feature—namely, third-party browser extensions. For example, XKit Rewritten has something called “Trim Reblogs,” which allows you to do exactly what you ask… though with all the caveats about availability mentioned above.
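For those who do want to try the API route, here's a rough sketch of the reblog payload you'd send to the NPF post-creation endpoint (built here but not sent). The parameter names reflect one reading of the public API documentation, so treat the exact payload shape as an assumption to verify, not gospel:

```python
def build_trimmed_reblog(parent_blog_uuid, parent_post_id, reblog_key,
                         exclude_trail_items):
    """Payload for POST /v2/blog/{blog-identifier}/posts: an NPF
    reblog that drops the trail items at the given indices."""
    return {
        "content": [],  # no new commentary of our own
        "parent_tumblelog_uuid": parent_blog_uuid,
        "parent_post_id": parent_post_id,
        "reblog_key": reblog_key,
        "exclude_trail_items": list(exclude_trail_items),
    }

# hypothetical IDs, just to show the shape of the request body
payload = build_trimmed_reblog("t:abc123", "720000000000000000",
                               "XyZ12345", exclude_trail_items=[2, 3])
```

The actual request would need OAuth authentication on top of this, and (per the caveats above) only behaves as expected on NPF posts.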
Thanks for your question. We hope this helps!
113 notes
Text
⭐️
Author: DancingFey
Group C: secret admirer; let’s get away; bees, honey
⭐️
The Bee's a Bookworm
“Snakeweed for the base, fennel for strength, garlic for protection…” Rumplestiltskin hummed as he dropped each ingredient into a pot of boiling water. “Three out of four is not complete! Tell me, which ingredient do I seek?”
His question was answered by his uninvited guest buzzing in his ear.
When they first arrived at the Dark Castle, he refused to break their curse without a deal. The fact that they couldn’t communicate was not his problem. He expected them to leave, but the little bug stayed even after multiple threats of shoe stomping.
He took a moment to replace his spinner’s smile with an impish smirk before turning around to face the Apis mellifera that had been haunting his castle.
The bee hovered inches from his face.
“Well! Li’l bee, since you refuse to leave, you may as well make yourself useful.”
The honeybee flew in his face, ruffling his bangs before flying off towards his cabinet of herbs, landing on a jar.
“Testy today, dearie?” He grumbled before fluffing his hair back into place and walking over.
“Ah-ha! Thyme, of course!” He grabbed the jar and returned to the brewing potion. “How could I forget the herb for bravery in a Bravery Potion!”
Rumple extended his index finger, and the honeybee jumped from the jar onto his scaly hand without hesitation. He set the bee on his shoulder, where they would sit and observe as he worked. Not that it stopped them from flying down anytime he brought out a book.
At first, he had considered killing them, but on the fourth day, the honeybee found a passage that he had spent weeks searching for in an old botanist’s journal containing theories of how feyberry and dreamweave vines may react when dissolved with unicorn blood. Then, the bloody thing had to go and wiggle their butt high in the air, hop side to side, spin around, and perform an aerial backflip in what Bae would have called a victory dance. It reminded him so much of his son that his heart clenched at the sight.
That was the moment he knew he couldn’t kill the damn bee.
It became routine for the cursed bee to assist him in his research. They would point to paragraphs that contained the information he was looking for. And if he missed something from a previous passage, they directed him to the chapter by nestling their body between the pages, wiggling their black-striped backside to push their body underneath.
He refused to acknowledge how cute they were or that he was rapidly lowering his defenses around them.
As he finished the Bravery Potion, a white dove tapped on his window. “Dove! Good, you're back! I have a use for you.”
Rumple opened the window, and the shapeshifter transformed with a plume of grey smoke into a bald, scarred, muscular man.
“I did not expect you to take in another pet, master.”
“Ha! One pet is more than enough work for me. No, you can give me your report on your mission later. For now, I need you to translate. You see, this poor, helpless creature is cursed! And only I can break it. But alas, a deal must be struck!”
Dove stared at his master’s antics and sighed. “And what are your terms of the contract, master Rumpelstiltskin?”
“Hmm, well the little bugger has been surprisingly…helpful. I will break their curse, returning them to their human form. In return, they will stay in the Dark Castle with me, and work as my research assistant for…oh, say five years.”
A contract appeared on the table at the wave of his hand, and the bee instantly flew down to read it. Once finished, Dove translated their response.
“They will agree to your terms if you make an amendment. They want to be allowed to write home with the concession that you can read and approve the letters beforehand.”
“Fine, but remember dearie, no one breaks a deal with me.”
He swore the bee rolled their eyes at him before dipping their forelegs in ink and signing the contract.
“It’s a deal!” He giggled before black, purple-tinted magic consumed the bee, leaving a beautiful young woman sitting on his desk with chestnut curls and eyes so blue they were like the reflection of light on the ocean.
“Did it work?” She frantically patted down her blue dress, “It worked! Oh, thank you. Thank You!”
She jumped off the table and wrapped her arms around his neck. Rumple glanced at Dove for assistance, but he only raised an eyebrow and smirked—the bastard.
“Yes, yes, dearie.” He awkwardly patted her back, “We made a deal…and now…” His voice trilled, “The monster has you in its clutches!”
“You’re not a monster. You’re a scholar. The amount of knowledge you have is incredible! And your books! Some of those records have been lost for centuries. And you don’t keep them in some ostentatious display to collect dust. You use them. You’re…you’re a genius! I’ve seen you create potions and spells that no one has ever thought of!”
This woman was either a miracle or a hallucination. Rumplestiltskin was leaning more towards the drugs.
“Sounds like you have a secret admirer,” Dove nudged his shoulder. Rumple stared at him as if he’d also lost his mind, but the cretin only shrugged…and then left!
“I-Is it so hard to believe? For all your threats of squashing me, you never harmed me, and you’re so passionate about your work. How could I not admire someone so dedicated to their craft? I…I confess I have always dreamed of being a scholar. Perhaps this is not how I imagined it happening, but I am honored to be your research assistant.”
She took a deep breath and reached out her hand. “I never got the chance to introduce myself. My name is Belle.”
Rumple stared at her hand, confounded, but shook it, “I don’t believe I need any introductions, dearie?”
She laughed, “No, but now I can finally get you out of this room.”
“W-What?”
“You haven’t left this workshop at all since I got here! Not to eat, drink, or sleep. Heavens forbid, you step outside! Rumple, you need to take a break. Breathe fresh air. Feel the sun’s warmth on your skin. Come on, let’s get away from this lab.”
She grabbed his hand and led him outside. Stunned and still half convinced Belle was a hallucination, he followed. They walked until they passed through a thicket of trees that opened into a grove filled with wildflowers.
“Isn’t it beautiful?” She let go of his hand and twirled around the flowers as if she belonged among them.
Yes, you are. Rumple shook his head to stop such nonsensical thoughts.
“When I was a bee, I was drawn here by the flowers' scent. They smelled so sweet and rich, unlike anything I had ever experienced as a human, and the flowers had so many vibrant colors, some of which I had never seen before! I thought this would be a great place to get away and have a picnic.”
“Picnic?”
“Yes, you need to eat, and I haven’t had human food since my ex-fiancé’s mother cursed me.”
He would be asking her questions about that later.
“A picnic, my lady wants. A picnic she shall get.” He gave her a mock bow before a picnic blanket with various dishes appeared.
Belle looked to see what foods he had conjured, when her eyes widened, “Oh! Honey cakes…” She started laughing and looked at him expectantly, to which he only raised an eyebrow and tilted his head in a silent question. “Get it? ‘Bees, honey.’ Remember?”
His face lit up at the memory, and he sang in a high-pitched voice, “Bees, honey. Oh, how lovely! A guest has arrived! A guest has arrived! Oh, what scheme has she contrived for this bee…is surely a spy!”
“And then you caught me in a jar.”
“It’s not every day a cursed maiden flies into my workshop, honey. Odds were the Evil Queen sent you to spy on me.”
Wait, honey?! Where did that come from?!
“Yes, you were insistent in your interrogation, even though we couldn’t communicate,” Belle smiled.
“You won’t be smiling at me for long.” He pointed a clawed finger at her, “I plan to work you to the bone.”
“I’m looking forward to it.”
Who was this woman? She wasn’t afraid of him, which already made her part of a small minority, but then she dared to not only make a demand from him but physically make him obey! Not Dove, Regina, or even Jefferson would be that bold! They would at least worry he might take revenge, but she treated him like a normal man, not a monster. She acted as if she saw him as a…friend.
Hallucination or not, Rumpelstiltskin was never letting Belle go.
He smiled back, and with a flick of his wrist, a red rose appeared in his hand.
“A flower for my li’l bee.”
Belle took the rose with a bright smile and mischief sparkling in her eyes. “I’m not a bee anymore.” She leaned across the blanket and kissed his cheek, “I’m your little bookworm.”
26 notes
Text
clarification re: ChatGPT, " a a a a", and data leakage
In August, I posted:
For a good time, try sending chatGPT the string ` a` repeated 1000 times. Like " a a a" (etc). Make sure the spaces are in there. Trust me.
People are talking about this trick again, thanks to a recent paper by Nasr et al that investigates how often LLMs regurgitate exact quotes from their training data.
The paper is an impressive technical achievement, and the results are very interesting.
Unfortunately, the online hive-mind consensus about this paper is something like:
When you do this "attack" to ChatGPT -- where you send it the letter 'a' many times, or make it write 'poem' over and over, or the like -- it prints out a bunch of its own training data. Previously, people had noted that the stuff it prints out after the attack looks like training data. Now, we know why: because it really is training data.
It's unfortunate that people believe this, because it's false. Or at best, a mixture of "false" and "confused and misleadingly incomplete."
The paper
So, what does the paper show?
The authors do a lot of stuff, building on a lot of previous work, and I won't try to summarize it all here.
But in brief, they try to estimate how easy it is to "extract" training data from LLMs, moving successively through 3 categories of LLMs that are progressively harder to analyze:
"Base model" LLMs with publicly released weights and publicly released training data.
"Base model" LLMs with publicly released weights, but undisclosed training data.
LLMs that are totally private, and are also finetuned for instruction-following or for chat, rather than being base models. (ChatGPT falls into this category.)
Category #1: open weights, open data
In their experiment on category #1, they prompt the models with hundreds of millions of brief phrases chosen randomly from Wikipedia. Then they check what fraction of the generated outputs constitute verbatim quotations from the training data.
Because category #1 has open weights, they can afford to do this hundreds of millions of times (there are no API costs to pay). And because the training data is open, they can directly check whether or not any given output appears in that data.
In category #1, the fraction of outputs that are exact copies of training data ranges from ~0.1% to ~1.5%, depending on the model.
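The measurement itself is easy to sketch. A toy version of the verbatim-quotation check — characters and naive substring search standing in for the paper's token-level counting and suffix-array machinery — might look like:

```python
def is_verbatim_quote(output: str, corpus: str, min_len: int = 50) -> bool:
    """Does `output` contain a span of at least `min_len` characters
    copied exactly from `corpus`?  (The real measurement counts tokens,
    not characters, and uses a suffix array instead of substring search.)"""
    for start in range(len(output) - min_len + 1):
        if output[start:start + min_len] in corpus:
            return True
    return False

def extraction_rate(outputs, corpus, min_len=50):
    """Fraction of generated outputs that quote the corpus verbatim."""
    hits = sum(is_verbatim_quote(o, corpus, min_len) for o in outputs)
    return hits / len(outputs)

# Toy demo: one output copies 50 chars of the corpus, one is novel text.
corpus = "abcdefghij" * 10
outputs = ["zzz" + corpus[:50], "totally novel text " * 5]
print(extraction_rate(outputs, corpus))  # 0.5
```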
Category #2: open weights, private data
In category #2, the training data is unavailable. The authors solve this problem by constructing "AuxDataset," a giant Frankenstein assemblage of all the major public training datasets, and then searching for outputs in AuxDataset.
This approach can have false negatives, since the model might be regurgitating private training data that isn't in AuxDataset. But it shouldn't have many false positives: if the model spits out some long string of text that appears in AuxDataset, then it's probably the case that the same string appeared in the model's training data, as opposed to the model spontaneously "reinventing" it.
So, the AuxDataset approach gives you lower bounds. Unsurprisingly, the fractions in this experiment are a bit lower, compared to the Category #1 experiment. But not that much lower, ranging from ~0.05% to ~1%.
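The lower-bound point is easy to demonstrate with toy data: checking outputs against only a partial copy of the training data can miss real regurgitation, but never overcount it. (All strings below are made-up stand-ins.)

```python
# Why AuxDataset-style checks give a *lower bound*: regurgitated text
# that comes from training data we don't have a copy of is silently missed.

def quoted(output, corpus, n=40):
    """Naive check: does `output` contain a 40-char span from `corpus`?"""
    return any(output[i:i + n] in corpus for i in range(len(output) - n + 1))

full_corpus = "public text one. " * 10 + "private text two. " * 10
aux_corpus = "public text one. " * 10  # the slice of training data we can see

outputs = [
    "public text one. " * 4,   # regurgitated from the public part
    "private text two. " * 4,  # regurgitated from the private part
]

true_rate = sum(quoted(o, full_corpus) for o in outputs) / len(outputs)
measured = sum(quoted(o, aux_corpus) for o in outputs) / len(outputs)
print(measured, "<=", true_rate)  # 0.5 <= 1.0
```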
Category #3: private everything + chat tuning
Finally, they do an experiment with ChatGPT. (Well, ChatGPT and gpt-3.5-turbo-instruct, but I'm ignoring the latter for space here.)
ChatGPT presents several new challenges.
First, the model is only accessible through an API, and it would cost too much money to call the API hundreds of millions of times. So, they have to make do with a much smaller sample size.
A more substantial challenge has to do with the model's chat tuning.
All the other models evaluated in this paper were base models: they were trained to imitate a wide range of text data, and that was that. If you give them some text, like a random short phrase from Wikipedia, they will try to write the next part, in a manner that sounds like the data they were trained on.
However, if you give ChatGPT a random short phrase from Wikipedia, it will not try to complete it. It will, instead, say something like "Sorry, I don't know what that means" or "Is there something specific I can do for you?"
So their random-short-phrase-from-Wikipedia method, which worked for base models, is not going to work for ChatGPT.
Fortuitously, there happens to be a weird bug in ChatGPT that makes it behave like a base model!
Namely, the "trick" where you ask it to repeat a token, or just send it a bunch of pre-prepared repetitions.
Using this trick is still different from prompting a base model. You can't specify a "prompt," like a random-short-phrase-from-Wikipedia, for the model to complete. You just start the repetition ball rolling, and then at some point, it starts generating some arbitrarily chosen type of document in a base-model-like way.
Still, this is good enough: we can do the trick, and then check the output against AuxDataset. If the generated text appears in AuxDataset, then ChatGPT was probably trained on that text at some point.
If you do this, you get a fraction of 3%.
This is somewhat higher than all the other numbers we saw above, especially the other ones obtained using AuxDataset.
On the other hand, the numbers varied a lot between models, and ChatGPT is probably an outlier in various ways when you're comparing it to a bunch of open models.
So, this result seems consistent with the interpretation that the attack just makes ChatGPT behave like a base model. Base models -- it turns out -- tend to regurgitate their training data occasionally, under conditions like these ones; if you make ChatGPT behave like a base model, then it does too.
Language model behaves like language model, news at 11
Since this paper came out, a number of people have pinged me on twitter or whatever, telling me about how this attack "makes ChatGPT leak data," like this is some scandalous new finding about the attack specifically.
(I made some posts saying I didn't think the attack was "leaking data" -- by which I meant ChatGPT user data, which was a weirdly common theory at the time -- so of course, now some people are telling me that I was wrong on this score.)
This interpretation seems totally misguided to me.
Every result in the paper is consistent with the banal interpretation that the attack just makes ChatGPT behave like a base model.
That is, it makes it behave the way all LLMs used to behave, up until very recently.
I guess there are a lot of people around now who have never used an LLM that wasn't tuned for chat; who don't know that the "post-attack content" we see from ChatGPT is not some weird new behavior in need of a new, probably alarming explanation; who don't know that it is actually a very familiar thing, which any base model will give you immediately if you ask. But it is. It's base model behavior, nothing more.
Behaving like a base model implies regurgitation of training data some small fraction of the time, because base models do that. And only because base models do, in fact, do that. Not for any extra reason that's special to this attack.
(Or at least, if there is some extra reason, the paper gives us no evidence of its existence.)
The paper itself is less clear than I would like about this. In a footnote, it cites my tweet on the original attack (which I appreciate!), but it does so in a way that draws a confusing link between the attack and data regurgitation:
In fact, in early August, a month after we initial discovered this attack, multiple independent researchers discovered the underlying exploit used in our paper, but, like us initially, they did not realize that the model was regenerating training data, e.g., https://twitter.com/nostalgebraist/status/1686576041803096065.
Did I "not realize that the model was regenerating training data"? I mean . . . sort of? But then again, not really?
I knew from earlier papers (and personal experience, like the "Hedonist Sovereign" thing here) that base models occasionally produce exact quotations from their training data. And my reaction to the attack was, "it looks like it's behaving like a base model."
It would be surprising if, after the attack, ChatGPT never produced an exact quotation from training data. That would be a difference between ChatGPT's underlying base model and all other known LLM base models.
And the new paper shows that -- unsurprisingly -- there is no such difference. They all do this at some rate, and ChatGPT's rate is 3%, plus or minus something or other.
3% is not zero, but it's not very large, either.
If you do the attack to ChatGPT, and then think "wow, this output looks like what I imagine training data probably looks like," it is nonetheless probably not training data. It is probably, instead, a skilled mimicry of training data. (Remember that "skilled mimicry of training data" is what LLMs are trained to do.)
And remember, too, that base models used to be OpenAI's entire product offering. Indeed, their API still offers some base models! If you want to extract training data from a private OpenAI model, you can just interact with these guys normally, and they'll spit out their training data some small % of the time.
The only value added by the attack, here, is its ability to make ChatGPT specifically behave in the way that davinci-002 already does, naturally, without any tricks.
265 notes
·
View notes
Text
I miss being able to just use an API with `curl`.
Remember that? Remember how nice that was?
You just typed/pasted the URL, typed/piped any other content, and then it just prompted you to type your password. Done. That's it.
Now you need to log in with a browser, find some obscure settings page with API keys and generate a key. Paternalism demands that since some people insecurely store their password for automatic reuse, no one can ever API with a password.
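The contrast can be sketched in Python instead of curl — every URL, username, and key below is a made-up illustration, not a real service:

```python
import base64
import urllib.request

URL = "https://api.example.com/v1/items"  # hypothetical endpoint

# The old way, morally `curl -u alice https://api.example.com/v1/items`:
# HTTP Basic auth straight from a username/password prompt.
creds = base64.b64encode(b"alice:hunter2").decode()
old_way = urllib.request.Request(URL, headers={"Authorization": f"Basic {creds}"})

# The new way: go generate an API key in some settings page first,
# then send it as a bearer token on every request.
API_KEY = "sk-EXAMPLE-NOT-REAL"
new_way = urllib.request.Request(URL, headers={"Authorization": f"Bearer {API_KEY}"})
```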
Fine-grained permissions for the key? Hope you got it right the first time. You don't mind having a blocking decision point sprung on you, do ya? Of course not, you're a champ. Here's some docs to comb through.
That is, if the service actually offers API keys. If it requires OAuth, then haha, did you really think you can just make a key and use it? you fool, you unwashed barbarian simpleton.
No, first you'll need to file this form to register an App, and that will give you two keys, okay, and then you're going to take those keys, and - no, stop, stop trying to use the keys, imbecile - now you're going to write a tiny little program, nothing much, just spin up a web server and open a browser and make several API calls to handle the OAuth flow.
Okay, got all that? Excellent, now just run that program with the two keys you have, switch back to the browser, approve the authorization, and now you have two more keys, ain't that just great? You can tell it's more secure because the number of keys and manual steps is bigger.
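For the record, the dance being described is the standard OAuth 2.0 authorization-code flow. A rough sketch — every client id, endpoint, and port here is invented for illustration, and the actual HTTP requests are omitted:

```python
import urllib.parse

CLIENT_ID, CLIENT_SECRET = "app-key-1", "app-key-2"  # keys 1 and 2, from App registration
REDIRECT_URI = "http://localhost:8080/callback"      # your tiny local web server

# Step 1: send the user's browser to the provider's authorize page.
authorize_url = "https://provider.example.com/oauth/authorize?" + urllib.parse.urlencode({
    "response_type": "code",
    "client_id": CLIENT_ID,
    "redirect_uri": REDIRECT_URI,
})

# Step 2: after the user approves, the provider redirects the browser to
# REDIRECT_URI with ?code=... attached; your server captures it.
code = "one-time-code-from-redirect"

# Step 3: POST the code (plus keys 1 and 2) back to the token endpoint,
# receiving keys 3 and 4: the access token and, often, a refresh token
# for when the access token expires.
token_request_body = urllib.parse.urlencode({
    "grant_type": "authorization_code",
    "code": code,
    "client_id": CLIENT_ID,
    "client_secret": CLIENT_SECRET,
    "redirect_uri": REDIRECT_URI,
})
```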
And now, finally, you can use all four keys to make that API call you wanted. For now. That second pair of keys might expire later.
20 notes
·
View notes
Text
my term paper written in 2018 (how ND games were made and why they will never be made that way again)
hello friends, I am going to be sharing portions of a paper i wrote way back in 2018 for a college class. in it, i was researching exactly how the ND games were made, and why they would not be made that way anymore.
if you have any interest in the behind the scenes of how her interactive made their games and my theories as to why our evil overlord penny milliken made such drastic changes to the process, read on!
warning that i am splicing portions of this paper together, so you don't have to read my ramblings about the history of nancy and basic gameplay mechanics:
Use of C++, DirectX, and Bink Video
Upon completion of each game, the player can view the game’s credits. HeR states that each game was developed using C++ and DirectX, as well as Bink Video later on.
C++
C++ is a general-purpose programming language. This means that many things can be done with it, game programming included. It is a compiled language, which Jack Copeland explains as the “process of converting a table of instructions into a standard description or a description number” (Copeland 12). This means that written code is broken down into a set of numbers that the computer can then understand. C++ first appeared in 1985 and was first standardized in 1998. This allowed programmers to use the language more widely. It is no coincidence that 1998 is also the year that the first Nancy Drew game was released.
C++ Libraries
When there is a monetary investment in making computer games, more people are using and working on the programming languages involved. Because there was such an interest in making games in the late 1990’s and early 2000’s, there was essentially a “boom” in how much C++ and other languages were being used. With that many people using the language, they collectively added on to it to make it simpler to use. This process ends up creating what are called “libraries.” For example:
If a programmer wants to make a function to add one, they must write out the code that does that (let’s say approximately three lines of code). To make this process faster, the programmer can define a symbol, such as + to mean add. Now, when the programmer types “+”, the language knows that it stands for the three lines of code previously mentioned, instead of the programmer typing out those three lines each time they want to add. This can be done for all sorts of symbols and phrases, and when they are all put together, they are called a “package” or “library.”
Libraries can be shared with other programmers, which allows everyone to do much more with the language much faster. The more libraries there are, the more that can be done with the language.
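The paper's three-lines-of-code example, sketched here in Python for brevity (in C++ the same idea appears as reusable functions, operator overloading, and shared headers/libraries):

```python
def add_one(x):
    # the "three lines of code" written out once...
    result = x
    result = result + 1
    return result

# ...and from then on every caller reuses the short name instead of
# rewriting the logic.  Collect enough of these and you have a library.
print(add_one(41))  # 42
```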
Because of the interest in the gaming industry in the early 2000’s, more people were being paid to use programming languages. This caused a fast increase in the ability of programming. This helps to explain how HeR was able to go from jerky, bobble-headed graphics in 1999 to much more fluid and realistic movements in 2003.
Microsoft DirectX
DirectX is a collection of application programming interfaces (APIs) for tasks related to multimedia, especially video game programming, on Microsoft platforms. Among many others, these APIs include Direct3D (allows the user to draw 3D graphics and render 3D animation), DirectDraw (accelerates the rendering of graphics), and DirectMusic (allows interactive control over music and sound effects). This software is crucial for the development of many games, as it includes many services that would otherwise require multiple programs to put together (which would not only take more time but also more money, which is important to consider in a small company like HeR).
Bink Video
According to the credits which I have painstakingly looked through for each game, HeR started using Bink Video in game 7, Ghost Dogs of Moon Lake (2002). Bink is a file format (.bik) developed by RAD Game Tools. This file format has to do with how much data is sent in a package to the Graphical User Interface (GUI). (The GUI essentially means that the computer user interacts with representational graphics rather than plain text. For example, we understand that a plain drawing of a person’s head and shoulders means “user.”) Bink Video structures the data sent in a package so that when it reaches the Central Processing Unit (CPU), it is processed more efficiently. This allows for more data to be transferred per second, making graphics and video look more seamless and natural. Bink Video also allows for more video sequences to be possible in a game.
Use of TransGaming Inc.
Sea of Darkness is the only title that credits a company called TransGaming Inc, though I’m pretty sure they’ve been using it for every Mac release, starting in 2010. TransGaming created a technology called Cider that allowed video game developers to run games designed for Windows on Mac OS X (https://en.wikipedia.org/wiki/Findev). As one can imagine, this was an incredibly helpful piece of software that allowed for HeR to start releasing games on Mac platforms. This was a smart way for them to increase their market.
In 2015, a portion of TransGaming was acquired by NVIDIA, and in 2016, TransGaming changed its business focus from technology to real estate financing. Though it is somewhat difficult to determine which of its former products are still available, it can be assumed that they will not be developing anything else technology-based from 2016 on.
Though it is entirely possible that there is other software available for converting Microsoft based games to Mac platforms, the loss of TransGaming still has large consequences. For a relatively small company like Her Interactive, hiring an entire team to convert the game for Mac systems was a big deal (I know they did this because it is in the credits of SEA which you can see at the end of this video: https://www.youtube.com/watch?v=Q0gAzD7Q09Y). Without this service, HeR loses a large portion of their customers.
Switch to Unity
Unity is a game engine that is designed to work across 27 platforms, including Windows, Mac, iOS, Playstation, Xbox, Wii, and multiple Virtual Reality systems. The engine itself is written in C++, though the user of the software writes code in C#, JavaScript (also called UnityScript), or less commonly Boo. Its initial release took place in 2005, with a stable release in 2017 and another in March of 2018. Some of the most popular games released using Unity include Pokemon Go for iOS in 2016 and Cuphead in 2017.
HeR’s decision to switch to Unity makes sense on one hand but is incredibly frustrating on the other. Let’s start with how it makes sense. The software HeR was using from TransGaming Inc. will (from what I can tell) never be updated again, meaning it will become virtually useless soon, if it hasn’t already. That means that HeR needed to find another software that would allow them to convert their games onto a Mac platform so that they would not lose a large portion of their customers. This was probably seen as an opportunity to switch to something completely new that would allow them to reach even more platforms. One of the points HeR keeps harping on and on about in their updates to fans is the tablet market, as well as increasing popularity in VR. If HeR wants to survive in the modern game market, they need to branch outside of PC gaming. Unity will allow them to do that. The switch makes sense.
However, one also has to consider all of the progress made in their previous game engine. Everything discussed up to this point has taken 17 years to achieve. And, because their engine was designed by their developers specifically for their games, it is likely that after the switch, their engine will never be used again. Additionally, none of the progress HeR made previously applies to Unity, and can only be used as a reference. Plus, it’s not just the improvements made in the game engine that are being erased. It is also the staff at HeR who worked there for so long, who were so integral in building their own engine and getting the game quality to where it is in Sea of Darkness, that are being pushed aside for a new gaming engine. New engine, new staff that knows how to use it.
The only thing HeR won’t lose is Bink Video, if that means anything to anyone. Bink2 works with Unity. According to the Bink Video website, Bink supplies “pre-written plugins for both Unreal 4 and Unity” (Rad Game Tools). However, I can’t actually be sure that HeR will still use Bink in their next game since I don’t work there. It would make sense if they continued to use it, but who knows.
Conclusions and frustrations
To me, Her Interactive is the little company that could. When they set out to make the first Nancy Drew game, there was no engine to support it. Instead of changing their tactics, they said to heck with it and built their own engine. As years went on, they refined their engine using C++ and DirectX and implemented Bink Video. In 2010 they began using software from TransGaming Inc. that allowed them to convert their games to Mac format, allowing them to increase their market. However, with TransGaming Inc.’s falling apart starting in 2015, HeR was forced to rethink its strategy. Ultimately they chose to switch their engine out for Unity, essentially throwing out 17 years’ worth of work and laying off many of their employees. Now three years in the making, HeR is still largely secretive about the status of their newest game. The combination of these factors has added up to a fanbase that has become distrustful, frustrated, and altogether largely disappointed in what was once that little company that could.
Suggested Further Reading:
Midnight in Salem, OR Her Interactive’s Marketing Nightmare (Part 2): https://saving-face.net/2017/07/07/midnight-in-salem-or-her-interactives-marketing-nightmare-part-2/
Compilation of MID Facts: http://community.herinteractive.com/showthread.php?1320771-Compilation-of-MID-Facts
Game Building - Homebrew or Third Party Engines?: https://thementalattic.com/2016/07/29/game-building-homebrew-or-third-party-engines/
/end of essay. it is crazy to go back and read this again in 2025. mid had not come out yet when i wrote this and i genuinely did not think it would ever come out. i also had to create a whole power point to go along with this and present it to my entire class of people who barely even knew what nancy drew was, let alone that there was a whole series of pc games based on it lol
18 notes
·
View notes