#Model-Driven Apps
🤖 Discover how Xrm.Copilot is revolutionizing Dynamics 365 CE with AI! Learn about its key capabilities, integration with the Xrm API, and how developers can build smarter model-driven apps. Boost CRM productivity with conversational AI. #Dynamics365 #Copilot #PowerPlatform #XrmAPI #CRM #AI
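As a hedged illustration only: the snippet below sketches what calling a copilot prompt from a model-driven form script might look like. The Xrm.Copilot method name, signature, and return shape used here (executePrompt resolving to an object with a text property) are assumptions for the sketch, not confirmed Client API surface; check current Microsoft documentation before relying on them.

```typescript
// Hypothetical sketch only. ASSUMPTION: an Xrm.Copilot.executePrompt(prompt)
// method that resolves to { text: string }. The real API surface may differ
// or may not exist in your environment; verify against Microsoft docs.
async function summarizeRecord(formContext: Xrm.FormContext): Promise<void> {
  const name = formContext.getAttribute("name")?.getValue() ?? "this record";
  try {
    const result = await (Xrm as any).Copilot.executePrompt(
      `Summarize recent activity for "${name}" in three bullet points.`
    );
    // Surface the copilot answer as a form notification.
    formContext.ui.setFormNotification(String(result.text), "INFO", "copilot_summary");
  } catch (err) {
    console.error("Copilot prompt failed", err);
  }
}
```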
#AI in Dynamics#Azure OpenAI#CRM AI#CRM Automation#dataverse#Dynamics 365 CE#Low-Code AI#Microsoft Copilot#Model-Driven Apps#Power Platform#Xrm API#Xrm.Copilot
0 notes
Model-Driven vs. Canvas Apps in Dynamics 365: Choosing the Right Approach
In the evolving world of Microsoft Dynamics 365, the choice between Model-Driven Apps and Canvas Apps plays a crucial role in shaping an organization's digital transformation strategy. Each app type offers unique customization capabilities, integration features, and user experiences, making it essential to understand their differences before implementation.
Model-Driven Apps: Structured & Data-Centric
🔹 Data-Centric Design – Built around Microsoft Dataverse (formerly the Common Data Service), Model-Driven Apps offer a structured UI based on entities, attributes, and relationships.
🔹 Metadata-Driven Customization – Developers can rapidly configure forms, views, and workflows, ensuring a consistent business process flow.
🔹 Seamless Integration – These apps integrate with Dynamics 365 modules, the Web API, and external systems, enabling smooth workflow automation.
💡 Use Cases:
✅ Sales Management – Streamline lead tracking, opportunity management, and revenue forecasting.
✅ Customer Service – Empower support teams with structured case management, SLA adherence, and service efficiency.
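To make that integration point concrete, here is a minimal sketch of querying Dataverse rows from model-driven client script using the Xrm Web API. The entity and column names are standard Dataverse ones, used here as illustrative placeholders:

```typescript
// Sketch: query the three most recent open opportunities from client script.
// Assumes it runs inside a model-driven app, where the Xrm global is available.
async function loadTopOpportunities(): Promise<void> {
  const result = await Xrm.WebApi.retrieveMultipleRecords(
    "opportunity",
    "?$select=name,estimatedvalue" +
      "&$filter=statecode eq 0&$orderby=createdon desc&$top=3"
  );
  for (const row of result.entities) {
    console.log(`${row.name}: ${row.estimatedvalue}`);
  }
}
```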
Canvas Apps: Flexibility & Visual Customization
🔹 Visual Design Paradigm – Create pixel-perfect UIs with a drag-and-drop editor, making app development more approachable.
🔹 Formula-Based Logic – Use Power Apps formulas to implement data processing, validation, and automation without extensive coding.
🔹 Device Independence – Build responsive apps for different devices using adaptive layouts and dynamic controls.
💡 Use Cases:
✅ Expense Reporting – Automate receipt scanning, AI-based data extraction, and submission workflows.
✅ Field Service Operations – Provide mobile access to work orders, customer details, and geolocation-based services.
0 notes
i think there’s something to be said about how the gig economy makes things ostensibly more convenient but also worse. and not just like, doordash guys take too long to get to you so your food is cold. but because the business model is centered around a million people doing work without any familiarity with what they’re doing and decentralized from the businesses they’re working with, you get service that’s being reinvented from scratch every time it’s purchased.
it happens all the time that I’ll order an uber and when they pick me up, they’ll just stop in the middle of the street with their hazards on, making me dodge traffic to get to them and pissing off the cars around them. and then I’ll get in the car and chat with the driver and find out they’re actually from two counties over and they’ve never driven here before, so they don’t know where parking is or whether they’re heading to a wide open parking lot or a busy downtown. and then you start to realize that they’re not being a dick, they’re just given as little information as possible every time they pick up a ride so they have to just guess how and where to pick up a passenger. and since they’re paid by ride, they’re incentivized to pick you up as fast as possible. and all the people who cared about finding a safe place to pick you up quit the app or stopped doing that so all you’re left with is the pissed off cockroach motherfuckers.
and then you see that this happens with every fucking app. doordash sucks because you pay 8 million dollars for delivery and you still have to hike half a mile to find the guy because he got lost in your apartment complex. Instacart sucks because the guy picking your groceries couldn’t care less about getting ripe fruit and replaces your heavy cream with shaving cream. customer support for all this sucks because the guy helping you can’t do anything more than offer you $5 credit, beg for your forgiveness, and hope you get out of the queue fast enough for him to go to the bathroom. because all of them aren’t given enough time to do a good job or enough money to care.
and every time a gig worker makes the experience suck for you, it’s a rational decision. they’re evaluating the money they’re being paid and if it’s worth getting paid less to do a good job, and correctly deciding that it isn’t. so you can’t even get mad, because you’d do it too. and so the company manages to pass on its race to the bottom to its lowest-paid employees.
#there was a post i read once about how companies do this because it effectively insulates them from customers anger#because either you get mad at the person in front of you or you realize that it’s not their fault#and then what are you gonna do? complain to customer service about how customer service doesn’t get paid enough? get real#i wish i could remember exactly what it called the phenomenon
10K notes
Use OptionSet Wrapper component to show color-coded options on the form – Model-driven apps/ Dynamics 365
We can use the OptionSet Wrapper component to show color-coded options on the form for choice (option set) fields. Below, we have added the OptionSet Wrapper component to the lead's Rating field on the Lead (Main) form. The field shows up, but without any colors: the component uses the color defined for that column, and no color is defined out of the box…
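The colors live per option on the choice column's metadata, so one way to check what is (or isn't) defined is to read the column back over the Dataverse metadata Web API. A rough sketch, assuming the standard lead Rating column (logical name leadqualitycode) and API version v9.2; run it from inside the app so the session authenticates the request:

```typescript
// Sketch: read back the Color defined on each option of the lead Rating
// column (logical name: leadqualitycode). Adjust the API version and column
// logical name for your environment.
async function logRatingColors(orgUrl: string): Promise<void> {
  const url =
    `${orgUrl}/api/data/v9.2/EntityDefinitions(LogicalName='lead')` +
    `/Attributes(LogicalName='leadqualitycode')` +
    `/Microsoft.Dynamics.CRM.PicklistAttributeMetadata?$expand=OptionSet`;
  const resp = await fetch(url, { headers: { Accept: "application/json" } });
  const meta = await resp.json();
  for (const opt of meta.OptionSet.Options) {
    const label = opt.Label?.UserLocalizedLabel?.Label ?? opt.Value;
    console.log(`${label}: ${opt.Color ?? "(no color defined)"}`);
  }
}
```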
0 notes
Tesla accused of hacking odometers to weasel out of warranty repairs

I'm on a 20+ city book tour for my new novel PICKS AND SHOVELS. Catch me at NEW ZEALAND'S UNITY BOOKS in AUCKLAND on May 2, and in WELLINGTON on May 3. More tour dates (Pittsburgh, PDX, London, Manchester) here.
A lawsuit filed in February accuses Tesla of remotely altering odometer values on failure-prone cars, in a bid to push these lemons beyond the 50,000 mile warranty limit:
https://www.thestreet.com/automotive/tesla-accused-of-using-sneaky-tactic-to-dodge-car-repairs
The suit was filed by a California driver who bought a used Tesla with 36,772 miles on it. The car's suspension kept failing, necessitating multiple servicings, and that was when the plaintiff noticed that the odometer readings for his identical daily drive were going up by ever-larger increments. This wasn't exactly subtle: he was driving 20 miles per day, but the odometer was clocking 72.35 miles/day. Still, how many of us monitor our daily odometer readings?
In short order, his car's odometer had rolled over the 50k mark and Tesla informed him that they would no longer perform warranty service on his lemon. Right after this happened, the new mileage clocked by his odometer returned to normal. This isn't the only Tesla owner who's noticed this behavior: Tesla subreddits are full of similar complaints:
https://www.reddit.com/r/RealTesla/comments/1ca92nk/is_tesla_inflating_odometer_to_show_more_range/
This isn't Tesla's first dieselgate scandal. In the summer of 2023, the company was caught lying to drivers about its cars' range:
https://pluralistic.net/2023/07/28/edison-not-tesla/#demon-haunted-world
Drivers noticed that they were getting far fewer miles out of their batteries than Tesla had advertised. Naturally, they contacted the company for service on their faulty cars. Tesla then set up an entire fake service operation in Nevada that these calls would be diverted to, called the "diversion team." Drivers with range complaints were put through to the "diverters" who would claim to run "remote diagnostics" on their cars and then assure them the cars were fine. They even installed a special xylophone in the diversion team office that diverters would ring every time they successfully deceived a driver.
These customers were then put in an invisible Tesla service jail. Their Tesla apps were silently altered so that they could no longer book service for their cars for any reason – instead, they'd have to leave a message and wait several days for a callback. The diversion center racked up 2,000 calls/week and diverters were under strict instructions to keep calls under five minutes. Eventually, these diverters were told that they should stop actually performing remote diagnostics on the cars of callers – instead, they'd just pretend to have run the diagnostics and claim no problems were found (so if your car had a potentially dangerous fault, they would falsely claim that it was safe to drive).
Most modern cars have some kind of internet connection, but Tesla goes much further. By design, its cars receive "over-the-air" updates, including updates that are adverse to drivers' interests. For example, if you stop paying the monthly subscription fee that entitles you to use your battery's whole charge, Tesla will send a wireless internet command to your car to restrict your driving to only half of your battery's charge.
This means that your Tesla is designed to follow instructions that you don't want it to follow, and, by design, those instructions can fundamentally alter your car's operating characteristics. For example, if you miss a payment on your Tesla, it can lock its doors and immobilize itself, then, when the repo man arrives, it will honk its horn, flash its lights, back out of its parking spot, and unlock itself so that it can be driven away:
https://tiremeetsroad.com/2021/03/18/tesla-allegedly-remotely-unlocks-model-3-owners-car-uses-smart-summon-to-help-repo-agent/
Some of the ways that your Tesla can be wirelessly downgraded (like disabling your battery) are disclosed at the time of purchase. Others (like locking you out and summoning a repo man) are secret. But whether disclosed or secret, both kinds of downgrade depend on the genuinely bizarre idea that a computer that you own, that is in your possession, can be relied upon to follow orders from the internet even when you don't want it to. This is weird enough when we're talking about a set-top box that won't let you record a TV show – but when we're talking about a computer that you put your body into and race down the road at 80mph inside of, it's frankly terrifying.
Obviously, most people would prefer to have the final say over how their computers work. I mean, maybe you trust the manufacturer's instructions and give your computer blanket permission to obey them, but if the manufacturer (or a hacker pretending to be the manufacturer, or a government who is issuing orders to the manufacturer) starts to do things that are harmful to you (or just piss you off), you want to be able to say to your computer, "OK, from now on, you take orders from me, not them."
In a state of nature, this is how computers work. To make a computer ignore its owner in favor of internet randos, the manufacturer has to build in a bunch of software countermeasures to stop you from reconfiguring or installing software of your choosing on it. And sure, that software might be able to withstand the attempts of normies like you and me to bypass it, but given that we'd all rather have the final say over how our computers work, someone is gonna figure out how to get around that software. I mean, show me a 10-foot fence and I'll show you an 11-foot ladder, right?
To stop that from happening, Congress passed the 1998 Digital Millennium Copyright Act. Despite the word "copyright" appearing in the name of the law, it's not really about defending copyright, it's about defending business models. Under Section 1201 of the DMCA, helping someone bypass a software lock is a felony punishable by a five-year prison sentence and a $500,000 fine (for a first offense). That's true whether or not any copyright infringement takes place.
So if you want to modify your Tesla – say, to prevent the company from cheating your odometer – you have to get around a software lock, and that's a felony. Indeed, if any manufacturer puts a software lock on its product, then any changes that require disabling or bypassing that lock become illegal. That's why you can't just buy reliable third-party printer ink – reverse-engineering the "is this an original HP ink cartridge?" program is a literal crime, even though using non-HP ink in your printer is absolutely not a copyright violation. Jay Freeman calls this effect "felony contempt of business model."
Thus we arrive at this juncture, where every time you use a product or device or service, it might behave in a way that is totally unlike the last time you used it. This is true whether you own, lease or merely interact with a product. The changes can be obvious, or they can be subtle to the point of invisibility. And while manufacturers can confine their "updates" to things that make the product better (for example, patching security vulnerabilities), there's nothing to stop them from using this uninspectable, non-countermandable veto over your devices' functionality to do things that harm you – like fucking with your odometer.
Or, you know, bricking your car. The defunct EV maker Fisker – which boasted that it made "software-based cars" – went bankrupt last year and bricked the entire fleet of unsold cars:
https://pluralistic.net/2024/10/10/software-based-car/#based
I call this ability to modify the underlying functionality of a product or service for every user, every time they use it, "twiddling," and it's a major contributor to enshittification:
https://pluralistic.net/2023/02/19/twiddler/
Enshittification's observable symptoms follow a predictable pattern: first, a company makes things good for its users, while finding ways to lock them in. Then, once it knows the users can't easily leave, the company makes things worse for end-users in order to deliver value to business customers. Once these businesses are locked in, the company siphons value away from them, too, until the product or service is a pile of shit that we still can't leave:
https://pluralistic.net/2025/02/26/ursula-franklin/#franklinite
Twiddling is key to enshittification: it's the method by which value is shifted from end-users to business customers, and from business customers to the platform. Twiddling is the "switch" in enshittification's series of minute, continuous bait-and-switches. The fact that DMCA 1201 makes it a crime to investigate systems with digital locks makes the modern computerized device a twiddler's playground. Sure, a driver might claim that their odometer is showing bad readings, but they can't dump their car's software and identify the code that is changing the odometer.
This is what I mean by "demon-haunted computers": a computer is "demon-haunted" if it is designed to detect when it is under scrutiny, and, when it senses a hostile observer, it changes its behavior to the innocuous, publicly claimed factory defaults:
https://pluralistic.net/2024/01/18/descartes-delenda-est/#self-destruct-sequence-initiated
But as soon as the observer goes away, the computer returns to its nefarious ways. This is exactly what happened with Dieselgate, when VW used software that detected the test-suite run by government emissions inspectors, and changed the engine's characteristics when it was under their observation. But once the car was back on the road, it once again began emitting toxic gas at levels that killed dozens of people and sickened thousands more:
https://www.nytimes.com/2015/09/29/upshot/how-many-deaths-did-volkswagens-deception-cause-in-us.html
Cars are among the most demon-haunted products we use on a daily basis. They are designed from the chassis up to do things that are harmful to their owners, from stealing our location data so it can be sold to data-brokers, to immobilizing themselves if you miss a payment, to downgrading themselves if you stop paying for a "subscription," to ratting out your driving habits to your insurer:
https://pluralistic.net/2023/07/24/rent-to-pwn/#kitt-is-a-demon
These are the "legitimate" ways that cars are computers that ignore their owners' orders in favor of instructions they get from the internet. But once a manufacturer arrogates that power to itself, it is confronted with a tempting smorgasbord of enshittificatory gambits to defraud you, control you, and gaslight you. Now, perhaps you could wield this power wisely, because you are in possession of the normal human ration of moral consideration for others, to say nothing of a sense of shame and a sense of honor.
But while corporations are (legally) people, they are decidedly not human. They are artificial lifeforms, "intellects vast and cool and unsympathetic" (as HG Wells said of the marauding aliens in War of the Worlds):
https://pluralistic.net/2025/04/14/timmy-share/#a-superior-moral-justification-for-selfishness
These alien invaders are busily xenoforming the planet, rendering it unfit for human habitation. Laws that ban reverse-engineering are a devastating weapon that corporations get to use in their bid to subjugate and devour the human race.
The US isn't the only country with a law like Section 1201 of the DMCA. Over the past 25 years, the US Trade Representative has arm-twisted nearly every country in the world into passing laws that are nearly identical to America's own disastrous DMCA. Why did countries agree to pass these laws? Well, because they had to, or the US would impose tariffs on them:
https://pluralistic.net/2025/03/03/friedmanite/#oil-crisis-two-point-oh
The Trump tariffs change everything, including this thing. There is no reason for America's (former) trading partners to continue to enforce the laws they passed to protect Big Tech's right to twiddle their citizens. That goes double for Tesla: rather than merely complaining about Musk's Nazi salutes, countries targeted by the regime he serves could retaliate against him, in a devastating fashion. By abolishing their anticircumvention laws, countries around the world would legalize jailbreaking Teslas, allowing mechanics to unlock all the subscription features and software upgrades for every Tesla driver, as well as offering their own software mods. Not only would this tank Tesla stock and force Musk to pay back the loans he collateralized with his shares (loans he used to buy Twitter and the US presidency), it would also abolish sleazy gimmicks like hacking drivers' odometers to get out of paying for warranty service:
https://pluralistic.net/2025/03/08/turnabout/#is-fair-play
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2025/04/15/musklemons/#more-like-edison-amirite
Image: Steve Jurvetson (modified) https://commons.wikimedia.org/wiki/File:Tesla_Model_S_Indoors.jpg
CC BY 2.0 https://creativecommons.org/licenses/by/2.0/deed.en
#pluralistic#tesla#demon-haunted cars#autoenshittification#fraud#odometer fraud#automotive#dieselgate#elon musk#musk#enshittification#1201#dmca 1201#felony contempt of business model#repair#right to repair
3K notes
On a blustery spring Thursday, just after midterms, I went out for noodles with Alex and Eugene, two undergraduates at New York University, to talk about how they use artificial intelligence in their schoolwork. When I first met Alex, last year, he was interested in a career in the arts, and he devoted a lot of his free time to photo shoots with his friends. But he had recently decided on a more practical path: he wanted to become a C.P.A. His Thursdays were busy, and he had forty-five minutes until a study session for an accounting class. He stowed his skateboard under a bench in the restaurant and shook his laptop out of his bag, connecting to the internet before we sat down.
Alex has wavy hair and speaks with the chill, singsong cadence of someone who has spent a lot of time in the Bay Area. He and Eugene scanned the menu, and Alex said that they should get clear broth, rather than spicy, “so we can both lock in our skin care.” Weeks earlier, when I’d messaged Alex, he had said that everyone he knew used ChatGPT in some fashion, but that he used it only for organizing his notes. In person, he admitted that this wasn’t remotely accurate. “Any type of writing in life, I use A.I.,” he said. He relied on Claude for research, DeepSeek for reasoning and explanation, and Gemini for image generation. ChatGPT served more general needs. “I need A.I. to text girls,” he joked, imagining an A.I.-enhanced version of Hinge. I asked if he had used A.I. when setting up our meeting. He laughed, and then replied, “Honestly, yeah. I’m not tryin’ to type all that. Could you tell?”
OpenAI released ChatGPT on November 30, 2022. Six days later, Sam Altman, the C.E.O., announced that it had reached a million users. Large language models like ChatGPT don’t “think” in the human sense—when you ask ChatGPT a question, it draws from the data sets it has been trained on and builds an answer based on predictable word patterns. Companies had experimented with A.I.-driven chatbots for years, but most sputtered upon release; Microsoft’s 2016 experiment with a bot named Tay was shut down after sixteen hours because it began spouting racist rhetoric and denying the Holocaust. But ChatGPT seemed different. It could hold a conversation and break complex ideas down into easy-to-follow steps. Within a month, Google’s management, fearful that A.I. would have an impact on its search-engine business, declared a “code red.”
Among educators, an even greater panic arose. It was too deep into the school term to implement a coherent policy for what seemed like a homework killer: in seconds, ChatGPT could collect and summarize research and draft a full essay. Many large campuses tried to regulate ChatGPT and its eventual competitors, mostly in vain. I asked Alex to show me an example of an A.I.-produced paper. Eugene wanted to see it, too. He used a different A.I. app to help with computations for his business classes, but he had never gotten the hang of using it for writing. “I got you,” Alex told him. (All the students I spoke with are identified by pseudonyms.)
He opened Claude on his laptop. I noticed a chat that mentioned abolition. “We had to read Robert Wedderburn for a class,” he explained, referring to the nineteenth-century Jamaican abolitionist. “But, obviously, I wasn’t tryin’ to read that.” He had prompted Claude for a summary, but it was too long for him to read in the ten minutes he had before class started. He told me, “I said, ‘Turn it into concise bullet points.’ ” He then transcribed Claude’s points in his notebook, since his professor ran a screen-free classroom.
Alex searched until he found a paper for an art-history class, about a museum exhibition. He had gone to the show, taken photographs of the images and the accompanying wall text, and then uploaded them to Claude, asking it to generate a paper according to the professor’s instructions. “I’m trying to do the least work possible, because this is a class I’m not hella fucking with,” he said. After skimming the essay, he felt that the A.I. hadn’t sufficiently addressed the professor’s questions, so he refined the prompt and told it to try again. In the end, Alex’s submission received the equivalent of an A-minus. He said that he had a basic grasp of the paper’s argument, but that if the professor had asked him for specifics he’d have been “so fucked.” I read the paper over Alex’s shoulder; it was a solid imitation of how an undergraduate might describe a set of images. If this had been 2007, I wouldn’t have made much of its generic tone, or of the precise, box-ticking quality of its critical observations.
Eugene, serious and somewhat solemn, had been listening with bemusement. “I would not cut and paste like he did, because I’m a lot more paranoid,” he said. He’s a couple of years younger than Alex and was in high school when ChatGPT was released. At the time, he experimented with A.I. for essays but noticed that it made easily noticed errors. “This passed the A.I. detector?” he asked Alex.
When ChatGPT launched, instructors adopted various measures to insure that students’ work was their own. These included requiring them to share time-stamped version histories of their Google documents, and designing written assignments that had to be completed in person, over multiple sessions. But most detective work occurs after submission. Services like GPTZero, Copyleaks, and Originality.ai analyze the structure and syntax of a piece of writing and assess the likelihood that it was produced by a machine. Alex said that his art-history professor was “hella old,” and therefore probably didn’t know about such programs. We fed the paper into a few different A.I.-detection websites. One said there was a twenty-eight-per-cent chance that the paper was A.I.-generated; another put the odds at sixty-one per cent. “That’s better than I expected,” Eugene said.
I asked if he thought what his friend had done was cheating, and Alex interrupted: “Of course. Are you fucking kidding me?”
As we looked at Alex’s laptop, I noticed that he had recently asked ChatGPT whether it was O.K. to go running in Nike Dunks. He had concluded that ChatGPT made for the best confidant. He consulted it as one might a therapist, asking for tips on dating and on how to stay motivated during dark times. His ChatGPT sidebar was an index of the highs and lows of being a young person. He admitted to me and Eugene that he’d used ChatGPT to draft his application to N.Y.U.—our lunch might never have happened had it not been for A.I. “I guess it’s really dishonest, but, fuck it, I’m here,” he said.
“It’s cheating, but I don’t think it’s, like, cheating,” Eugene said. He saw Alex’s art-history essay as a victimless crime. He was just fulfilling requirements, not training to become a literary scholar.
Alex had to rush off to his study session. I told Eugene that our conversation had made me wonder about my function as a professor. He asked if I taught English, and I nodded.
“Mm, O.K.,” he said, and laughed. “So you’re, like, majorly affected.”
I teach at a small liberal-arts college, and I often joke that a student is more likely to hand in a big paper a year late (as recently happened) than to take a dishonorable shortcut. My classes are small and intimate, driven by processes and pedagogical modes, like letting awkward silences linger, that are difficult to scale. As a result, I have always had a vague sense that my students are learning something, even when it is hard to quantify. In the past, if I was worried that a paper had been plagiarized, I would enter a few phrases from it into a search engine and call it due diligence. But I recently began noticing that some students’ writing seemed out of synch with how they expressed themselves in the classroom. One essay felt stitched together from two minds—half of it was polished and rote, the other intimate and unfiltered. Having never articulated a policy for A.I., I took the easy way out. The student had had enough shame to write half of the essay, and I focussed my feedback on improving that part.
It’s easy to get hung up on stories of academic dishonesty. Late last year, in a survey of college and university leaders, fifty-nine per cent reported an increase in cheating, a figure that feels conservative when you talk to students. A.I. has returned us to the question of what the point of higher education is. Until we’re eighteen, we go to school because we have to, studying the Second World War and reducing fractions while undergoing a process of socialization. We’re essentially learning how to follow rules. College, however, is a choice, and it has always involved the tacit agreement that students will fulfill a set of tasks, sometimes pertaining to subjects they find pointless or impractical, and then receive some kind of credential. But even for the most mercenary of students, the pursuit of a grade or a diploma has come with an ancillary benefit. You’re being taught how to do something difficult, and maybe, along the way, you come to appreciate the process of learning. But the arrival of A.I. means that you can now bypass the process, and the difficulty, altogether.
There are no reliable figures for how many American students use A.I., just stories about how everyone is doing it. A 2024 Pew Research Center survey of students between the ages of thirteen and seventeen suggests that a quarter of teens currently use ChatGPT for schoolwork, double the figure from 2023. OpenAI recently released a report claiming that one in three college students uses its products. There’s good reason to believe that these are low estimates. If you grew up Googling everything or using Grammarly to give your prose a professional gloss, it isn’t far-fetched to regard A.I. as just another productivity tool. “I see it as no different from Google,” Eugene said. “I use it for the same kind of purpose.”
Being a student is about testing boundaries and staying one step ahead of the rules. While administrators and educators have been debating new definitions for cheating and discussing the mechanics of surveillance, students have been embracing the possibilities of A.I. A few months after the release of ChatGPT, a Harvard undergraduate got approval to conduct an experiment in which it wrote papers that had been assigned in seven courses. The A.I. skated by with a 3.57 G.P.A., a little below the school’s average. Upstart companies introduced products that specialized in “humanizing” A.I.-generated writing, and TikTok influencers began coaching their audiences on how to avoid detection.
Unable to keep pace, academic administrations largely stopped trying to control students’ use of artificial intelligence and adopted an attitude of hopeful resignation, encouraging teachers to explore the practical, pedagogical applications of A.I. In certain fields, this wasn’t a huge stretch. Studies show that A.I. is particularly effective in helping non-native speakers acclimate to college-level writing in English. In some STEM classes, using generative A.I. as a tool is acceptable. Alex and Eugene told me that their accounting professor encouraged them to take advantage of free offers on new A.I. products available only to undergraduates, as companies competed for student loyalty throughout the spring. In May, OpenAI announced ChatGPT Edu, a product specifically marketed for educational use, after schools including Oxford University, Arizona State University, and the University of Pennsylvania’s Wharton School of Business experimented with incorporating A.I. into their curricula. This month, the company detailed plans to integrate ChatGPT into every dimension of campus life, with students receiving “personalized” A.I. accounts to accompany them throughout their years in college.
But for English departments, and for college writing in general, the arrival of A.I. has been more vexed. Why bother teaching writing now? The future of the midterm essay may be a quaint worry compared with larger questions about the ramifications of artificial intelligence, such as its effect on the environment, or the automation of jobs. And yet has there ever been a time in human history when writing was so important to the average person? E-mails, texts, social-media posts, angry missives in comments sections, customer-service chats—let alone one’s actual work. The way we write shapes our thinking. We process the world through the composition of text dozens of times a day, in what the literary scholar Deborah Brandt calls our era of “mass writing.” It’s possible that the ability to write original and interesting sentences will become only more important in a future where everyone has access to the same A.I. assistants.
Corey Robin, a writer and a professor of political science at Brooklyn College, read the early stories about ChatGPT with skepticism. Then his daughter, a sophomore in high school at the time, used it to produce an essay that was about as good as those his undergraduates wrote after a semester of work. He decided to stop assigning take-home essays. For the first time in his thirty years of teaching, he administered in-class exams.
Robin told me he finds many of the steps that universities have taken to combat A.I. essays to be “hand-holding that’s not leading people anywhere.” He has become a believer in the passage-identification blue-book exam, in which students name and contextualize excerpts of what they’ve read for class. “Know the text and write about it intelligently,” he said. “That was a way of honoring their autonomy without being a cop.”
His daughter, who is now a senior, complains that her teachers rarely assign full books. And Robin has noticed that college students are more comfortable with excerpts than with entire articles, and prefer short stories to novels. “I don’t get the sense they have the kind of literary or cultural mastery that used to be the assumption upon which we assigned papers,” he said. One study, published last year, found that fifty-eight per cent of students at two Midwestern universities had so much trouble interpreting the opening paragraphs of “Bleak House,” by Charles Dickens, that “they would not be able to read the novel on their own.” And these were English majors.
The return to pen and paper has been a common response to A.I. among professors, with sales of blue books rising significantly at certain universities in the past two years. Siva Vaidhyanathan, a professor of media studies at the University of Virginia, grew dispirited after some students submitted what he suspected was A.I.-generated work for an assignment on how the school’s honor code should view A.I.-generated work. He, too, has decided to return to blue books, and is pondering the logistics of oral exams. “Maybe we go all the way back to 450 B.C.,” he told me.
But other professors have renewed their emphasis on getting students to see the value of process. Dan Melzer, the director of the first-year composition program at the University of California, Davis, recalled that “everyone was in a panic” when ChatGPT first hit. Melzer’s job is to think about how writing functions across the curriculum so that all students, from prospective scientists to future lawyers, get a chance to hone their prose. Consequently, he has an accommodating view of how norms around communication have changed, especially in the internet age. He was sympathetic to kids who viewed some of their assignments as dull and mechanical and turned to ChatGPT to expedite the process. He called the five-paragraph essay—the classic “hamburger” structure, consisting of an introduction, three supporting body paragraphs, and a conclusion—“outdated,” having descended from élitist traditions.
Melzer believes that some students loathe writing because of how it’s been taught, particularly in the past twenty-five years. The No Child Left Behind Act, from 2002, instituted standards-based reforms across all public schools, resulting in generations of students being taught to write according to rigid testing rubrics. As one teacher wrote in the Washington Post in 2013, students excelled when they mastered a form of “bad writing.” Melzer has designed workshops that treat writing as a deliberative, iterative process involving drafting, feedback (from peers and also from ChatGPT), and revision.
“If you assign a generic essay topic and don’t engage in any process, and you just collect it a month later, it’s almost like you’re creating an environment tailored to crime,” he said. “You’re encouraging crime in your community!”
I found Melzer’s pedagogical approach inspiring; I instantly felt bad for routinely breaking my class into small groups so that they could “workshop” their essays, as though the meaning of this verb were intuitively clear. But, as a student, I’d have found Melzer’s focus on process tedious—it requires a measure of faith that all the work will pay off in the end. Writing is hard, regardless of whether it’s a five-paragraph essay or a haiku, and it’s natural, especially when you’re a college student, to want to avoid hard work—this is why classes like Melzer’s are compulsory. “You can imagine that students really want to be there,” he joked.
College is all about opportunity costs. One way of viewing A.I. is as an intervention in how people choose to spend their time. In the early nineteen-sixties, college students spent an estimated twenty-four hours a week on schoolwork. Today, that figure is about fifteen, a sign, to critics of contemporary higher education, that young people are beneficiaries of grade inflation—in a survey conducted by the Harvard Crimson, nearly eighty per cent of the class of 2024 reported a G.P.A. of 3.7 or higher—and lack the diligence of their forebears. I don’t know how many hours I spent on schoolwork in the late nineties, when I was in college, but I recall feeling that there was never enough time. I suspect that, even if today’s students spend less time studying, they don’t feel significantly less stressed. It’s the nature of campus life that everyone assimilates into a culture of busyness, and a lot of that anxiety has been shifted to extracurricular or pre-professional pursuits. A dean at Harvard remarked that students feel compelled to find distinction outside the classroom because they are largely indistinguishable within it.
Eddie, a sociology major at Long Beach State, is older than most of his classmates. He graduated high school in 2010, and worked full time while attending a community college. “I’ve gone through a lot to be at school,” he told me. “I want to learn as much as I can.” ChatGPT, which his therapist recommended to him, was ubiquitous at Long Beach even before the California State University system, which Long Beach is a part of, announced a partnership with OpenAI, giving its four hundred and sixty thousand students access to ChatGPT Edu. “I was a little suspicious of how convenient it was,” Eddie said. “It seemed to know a lot, in a way that seemed so human.”
He told me that he used A.I. “as a brainstorm” but never for writing itself. “I limit myself, for sure.” Eddie works for Los Angeles County, and he was talking to me during a break. He admitted that, when he was pressed for time, he would sometimes use ChatGPT for quizzes. “I don’t know if I’m telling myself a lie,” he said. “I’ve given myself opportunities to do things ethically, but if I’m rushing to work I don’t feel bad about that,” particularly for courses outside his major.
I recognized Eddie’s conflict. I’ve used ChatGPT a handful of times, and on one occasion it accomplished a scheduling task so quickly that I began to understand the intoxication of hyper-efficiency. I’ve felt the need to stop myself from indulging in idle queries. Almost all the students I interviewed in the past few months described the same trajectory: from using A.I. to assist with organizing their thoughts to off-loading their thinking altogether. For some, it became something akin to social media, constantly open in the corner of the screen, a portal for distraction. This wasn’t like paying someone to write a paper for you—there was no social friction, no aura of illicit activity. Nor did it feel like sharing notes, or like passing off what you’d read in CliffsNotes or SparkNotes as your own analysis. There was no real time to reflect on questions of originality or honesty—the student basically became a project manager. And for students who use it the way Eddie did, as a kind of sounding board, there’s no clear threshold where the work ceases to be an original piece of thinking. In April, Anthropic, the company behind Claude, released a report drawn from a million anonymized student conversations with its chatbots. It suggested that more than half of user interactions could be classified as “collaborative,” involving a dialogue between student and A.I. (Presumably, the rest of the interactions were more extractive.)
May, a sophomore at Georgetown, was initially resistant to using ChatGPT. “I don’t know if it was an ethics thing,” she said. “I just thought I could do the assignment better, and it wasn’t worth the time being saved.” But she began using it to proofread her essays, and then to generate cover letters, and now she uses it for “pretty much all” her classes. “I don’t think it’s made me a worse writer,” she said. “It’s perhaps made me a less patient writer. I used to spend hours writing essays, nitpicking over my wording, really thinking about how to phrase things.” College had made her reflect on her experience at an extremely competitive high school, where she had received top grades but retained very little knowledge. As a result, she was the rare student who found college somewhat relaxed. ChatGPT helped her breeze through busywork and deepen her engagement with the courses she felt passionate about. “I was trying to think, Where’s all this time going?” she said. I had never envied a college student until she told me the answer: “I sleep more now.”
Harry Stecopoulos oversees the University of Iowa’s English department, which has more than eight hundred majors. On the first day of his introductory course, he asks students to write by hand a two-hundred-word analysis of the opening paragraph of Ralph Ellison’s “Invisible Man.” There are always a few grumbles, and students have occasionally walked out. “I like the exercise as a tone-setter, because it stresses their writing,” he told me.
The return of blue-book exams might disadvantage students who were encouraged to master typing at a young age. Once you’ve grown accustomed to the smooth rhythms of typing, reverting to a pen and paper can feel stifling. But neuroscientists have found that the “embodied experience” of writing by hand taps into parts of the brain that typing does not. Being able to write one way—even if it’s more efficient—doesn’t make the other way obsolete. There’s something lofty about Stecopoulos’s opening-day exercise. But there’s another reason for it: the handwritten paragraph also begins a paper trail, attesting to voice and style, that a teaching assistant can consult if a suspicious paper is submitted.
Kevin, a third-year student at Syracuse University, recalled that, on the first day of a class, the professor had asked everyone to compose some thoughts by hand. “That brought a smile to my face,” Kevin said. “The other kids are scratching their necks and sweating, and I’m, like, This is kind of nice.”
Kevin had worked as a teaching assistant for a mandatory course that first-year students take to acclimate to campus life. Writing assignments involved basic questions about students’ backgrounds, he told me, but they often used A.I. anyway. “I was very disturbed,” he said. He occasionally uses A.I. to help with translations for his advanced Arabic course, but he’s come to look down on those who rely heavily on it. “They almost forget that they have the ability to think,” he said. Like many former holdouts, Kevin felt that his judicious use of A.I. was more defensible than his peers’ use of it.
As ChatGPT begins to sound more human, will we reconsider what it means to sound like ourselves? Kevin and some of his friends pride themselves on having an ear attuned to A.I.-generated text. The hallmarks, he said, include a preponderance of em dashes and a voice that feels blandly objective. An acquaintance had run an essay that she had written herself through a detector, because she worried that she was starting to phrase things like ChatGPT did. He read her essay: “I realized, like, It does kind of sound like ChatGPT. It was freaking me out a little bit.”
A particularly disarming aspect of ChatGPT is that, if you point out a mistake, it communicates in the backpedalling tone of a contrite student. (“Apologies for the earlier confusion. . . .”) Its mistakes are often referred to as hallucinations, a description that seems to anthropomorphize A.I., conjuring a vision of a sleep-deprived assistant. Some professors told me that they had students fact-check ChatGPT’s work, as a way of discussing the importance of original research and of showing the machine’s fallibility. Hallucination rates have grown worse for most A.I.s, with no single reason for the increase. As a researcher told the Times, “We still don’t know how these models work exactly.”
But many students claim to be unbothered by A.I.’s mistakes. They appear nonchalant about the question of achievement, and even dissociated from their work, since it is only notionally theirs. Joseph, a Division I athlete at a Big Ten school, told me that he saw no issue with using ChatGPT for his classes, but he did make one exception: he wanted to experience his African-literature course “authentically,” because it involved his heritage. Alex, the N.Y.U. student, said that if one of his A.I. papers received a subpar grade his disappointment would be focussed on the fact that he’d spent twenty dollars on his subscription. August, a sophomore at Columbia studying computer science, told me about a class where she was required to compose a short lecture on a topic of her choosing. “It was a class where everyone was guaranteed an A, so I just put it in and I maybe edited like two words and submitted it,” she said. Her professor identified her essay as exemplary work, and she was asked to read from it to a class of two hundred students. “I was a little nervous,” she said. But then she realized, “If they don’t like it, it wasn’t me who wrote it, you know?”
Kevin, by contrast, desired a more general kind of moral distinction. I asked if he would be bothered to receive a lower grade on an essay than a classmate who’d used ChatGPT. “Part of me is able to compartmentalize and not be pissed about it,” he said. “I developed myself as a human. I can have a superiority complex about it. I learned more.” He smiled. But then he continued, “Part of me can also be, like, This is so unfair. I would have loved to hang out with my friends more. What did I gain? I made my life harder for all that time.”
In my conversations, just as college students invariably thought of ChatGPT as merely another tool, people older than forty focussed on its effects, drawing a comparison to G.P.S. and the erosion of our relationship to space. The London cabdrivers rigorously trained in “the knowledge” famously developed abnormally large posterior hippocampi, the part of the brain crucial for long-term memory and spatial awareness. And yet, in the end, most people would probably rather have swifter travel than sharper memories. What is worth preserving, and what do we feel comfortable off-loading in the name of efficiency?
What if we take seriously the idea that A.I. assistance can accelerate learning—that students today are arriving at their destinations faster? In 2023, researchers at Harvard introduced a self-paced A.I. tutor in a popular physics course. Students who used the A.I. tutor reported higher levels of engagement and motivation and did better on a test than those who were learning from a professor. May, the Georgetown student, told me that she often has ChatGPT produce extra practice questions when she’s studying for a test. Could A.I. be here not to destroy education but to revolutionize it? Barry Lam teaches in the philosophy department at the University of California, Riverside, and hosts a popular podcast, Hi-Phi Nation, which applies philosophical modes of inquiry to everyday topics. He began wondering what it would mean for A.I. to actually be a productivity tool. He spoke to me from the podcast studio he built in his shed. “Now students are able to generate in thirty seconds what used to take me a week,” he said. He compared education to carpentry, one of his many hobbies. Could you skip to using power tools without learning how to saw by hand? If students were learning things faster, then it stood to reason that Lam could assign them “something very hard.” He wanted to test this theory, so for final exams he gave his undergraduates a Ph.D.-level question involving denotative language and the German logician Gottlob Frege which was, frankly, beyond me.
“They fucking failed it miserably,” he said. He adjusted his grading curve accordingly.
Lam doesn’t find the use of A.I. morally indefensible. “It’s not plagiarism in the cut-and-paste sense,” he argued, because there’s technically no original version. Rather, he finds it a potential waste of everyone’s time. At the start of the semester, he has told students, “If you’re gonna just turn in a paper that’s ChatGPT-generated, then I will grade all your work by ChatGPT and we can all go to the beach.”
Nobody gets into teaching because he loves grading papers. I talked to one professor who rhapsodized about how much more his students were learning now that he’d replaced essays with short exams. I asked if he missed marking up essays. He laughed and said, “No comment.” An undergraduate at Northeastern University recently accused a professor of using A.I. to create course materials; she filed a formal complaint with the school, requesting a refund for some of her tuition. The dustup laid bare the tension between why many people go to college and why professors teach. Students are raised to understand achievement as something discrete and measurable, but when they arrive at college there are people like me, imploring them to wrestle with difficulty and abstraction. Worse yet, they are told that grades don’t matter as much as they did when they were trying to get into college—only, by this point, students are wired to find the most efficient path possible to good marks.
As the craft of writing is degraded by A.I., original writing has become a valuable resource for training language models. Earlier this year, a company called Catalyst Research Alliance advertised “academic speech data and student papers” from two research studies run in the late nineties and mid-two-thousands at the University of Michigan. The school asked the company to halt its work—the data was available for free to academics anyway—and a university spokesperson said that student data “was not and has never been for sale.” But the situation did lead many people to wonder whether institutions would begin viewing original student work as a potential revenue stream.
According to a recent study from the Organisation for Economic Co-operation and Development, human intellect has declined since 2012. An assessment of tens of thousands of adults in nearly thirty countries showed an over-all decade-long drop in test scores for math and for reading comprehension. Andreas Schleicher, the director for education and skills at the O.E.C.D., hypothesized that the way we consume information today—often through short social-media posts—has something to do with the decline in literacy. (One of Europe’s top performers in the assessment was Estonia, which recently announced that it will bring A.I. to some high-school students in the next few years, sidelining written essays and rote homework exercises in favor of self-directed learning and oral exams.)
Lam, the philosophy professor, used to be a colleague of mine, and for a brief time we were also neighbors. I’d occasionally look out the window and see him building a fence, or gardening. He’s an avid amateur cook, guitarist, and carpenter, and he remains convinced that there is value to learning how to do things the annoying, old-fashioned, and—as he puts it—“artisanal” way. He told me that his wife, Shanna Andrawis, who has been a high-school teacher since 2008, frequently disagreed with his cavalier methods for dealing with large learning models. Andrawis argues that dishonesty has always been an issue. “We are trying to mass educate,” she said, meaning there’s less room to be precious about the pedagogical process. “I don’t have conversations with students about ‘artisanal’ writing. But I have conversations with them about our relationship. Respect me enough to give me your authentic voice, even if you don’t think it’s that great. It’s O.K. I want to meet you where you’re at.”
Ultimately, Andrawis was less fearful of ChatGPT than of the broader conditions of being young these days. Her students have grown increasingly introverted, staring at their phones with little desire to “practice getting over that awkwardness” that defines teen life, as she put it. A.I. might contribute to this deterioration, but it isn’t solely to blame. It’s “a little cherry on top of an already really bad ice-cream sundae,” she said.
When the school year began, my feelings about ChatGPT were somewhere between disappointment and disdain, focussed mainly on students. But, as the weeks went by, my sense of what should be done and who was at fault grew hazier. Eliminating core requirements, rethinking G.P.A., teaching A.I. skepticism—none of the potential fixes could turn back the preconditions of American youth. Professors can reconceive of the classroom, but there is only so much we control. I lacked faith that educational institutions would ever regard new technologies as anything but inevitable. Colleges and universities, many of which had tried to curb A.I. use just a few semesters ago, rushed to partner with companies like OpenAI and Anthropic, deeming a product that didn’t exist four years ago essential to the future of school.
Except for a year spent bumming around my home town, I’ve basically been on a campus for the past thirty years. Students these days approach college as consumers, in ways that never would have occurred to me when I was their age. They’ve grown up at a time when society values high-speed takes, not the slow deliberation of critical thinking. Although I’ve empathized with my students’ various mini-dramas, I rarely project myself into their lives. I notice them noticing one another, and I let the mysteries of their lives go. Their pressures are so different from the ones I felt as a student. Although I envy their metabolisms, I would not wish for their sense of horizons.
Education, particularly in the humanities, rests on a belief that, alongside the practical things students might retain, some arcane idea mentioned in passing might take root in their mind, blossoming years in the future. A.I. allows any of us to feel like an expert, but it is risk, doubt, and failure that make us human. I often tell my students that this is the last time in their lives that someone will have to read something they write, so they might as well tell me what they actually think.
Despite all the current hysteria around students cheating, they aren’t the ones to blame. They did not lobby for the introduction of laptops when they were in elementary school, and it’s not their fault that they had to go to school on Zoom during the pandemic. They didn’t create the A.I. tools, nor were they at the forefront of hyping technological innovation. They were just early adopters, trying to outwit the system at a time when doing so has never been so easy. And they have no more control than the rest of us. Perhaps they sense this powerlessness even more acutely than I do. One moment, they are being told to learn to code; the next, it turns out employers are looking for the kind of “soft skills” one might learn as an English or a philosophy major. In February, a labor report from the Federal Reserve Bank of New York showed that computer-science majors had a higher unemployment rate than ethnic-studies majors did—the result, some believed, of A.I. automating entry-level coding jobs.
None of the students I spoke with seemed lazy or passive. Alex and Eugene, the N.Y.U. students, worked hard—but part of their effort went to editing out anything in their college experiences that felt extraneous. They were radically resourceful.
When classes were over and students were moving into their summer housing, I e-mailed with Alex, who was settling in in the East Village. He’d just finished his finals, and estimated that he’d spent between thirty minutes and an hour composing two papers for his humanities classes. Without the assistance of Claude, it might have taken him around eight or nine hours. “I didn’t retain anything,” he wrote. “I couldn’t tell you the thesis for either paper hahhahaha.” He received an A-minus and a B-plus.
307 notes
RenDog x Louis Vuitton
18/18 of LifeStyle: A Life Series Fashion Zine!!
-
Last, but most certainly not least, we have the Red King Mr. RenDiggityDog himself. I knew the instant I saw the reference for this pose that it would be the one I used for Ren - the model had the perfect amount of charisma and attitude, and I think it fits him just perfectly. And before anyone asks, no, I didn't draw the pattern on his shirt by hand! I pulled it, and most of the repeating patterns for this whole series, directly from the item or brand site I was working with, to save time and my wrist (and my sanity).
-
(Click through for my Sappy Conclusions under the cut)
And with that (except for a special little bonus illustration vis-à-vis the unused Bdubs piece), we are finished with the LifeStyle zine. All 18 of the official pieces have been posted, almost exactly a year after I first saw a red shirt in the window display of an Armani store and started to compile a list of designers and brands in my phone's notes app. The pieces are laid out here before you on my socials. A print copy of the zine sits on my bedroom shelf.
I really, truly could not have imagined the amount of love and support this community has poured out for these pieces. I am being 100% honest when I say I thought I'd be posting these into the void. Every single effusive tag, every positive comment, and every single like means so much to me, from the bottom of my heart, especially for a project that was as passion-driven as this one was for me.
This is the first time I can say that I've truly finished a long-term project of mine, despite having ups and downs and stops and starts in between, and it feels surreal to be stepping away and calling it complete. But I also know that the community loved it just as much as I did, and it's made me even more passionate about wanting to make and do more moving ahead, both for the MCYT and Life Series fandoms and far beyond, into my own original stories and crafts.
So here's to many more, for me and for all of you! Thank you so much for all your amazing support!!!!
#llsmp#trafficblr#third life#rendog#renthedog#louis vuitton#mcyt#illustration#digital art#fashion design#fanart#my art#queen.jpeg#traffic smp#lifestyle zine#im not crying i just put eyedrops in ૮ ⸝⸝o̴̶̷᷄ ·̭ o̴̶̷̥᷅⸝⸝ ྀིა⸝#i speak#rendog fanart
525 notes
·
View notes
Note
got a question I was hoping you could answer!
why do all apps have to go through an app store? why doesn't anywhere have their app downloadable from the internet or something?
was wondering this because lots of issues with apps seem to stem from having to comply with app store guidelines and whatnot. So why not avoid that problem and make the app available off the appstore? And if part of it is because they're easier to find in the appstore, why not do both? why not also offer the download on a website or something?
there's gotta be some reason why there's afaik no one who offers a download for their app without the appstore right?
There are absolutely other ways to get apps, and the one that springs immediately to mind is the F-Droid App Repository.
Sideloading is the process of loading an app that doesn't come from your phone's OS-approved app store. It's really easy on Android (basically just a couple of clicks) but requires jailbreaking on an iPhone.
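(Quick nerd aside for anyone curious what those "couple of clicks" actually toggle: since Android 8.0, "install unknown apps" is a per-app permission, and the standard SDK exposes it to developers. Below is a minimal Kotlin sketch, assuming an ordinary Android project; the activity and function names are made up for illustration, but canRequestPackageInstalls() and ACTION_MANAGE_UNKNOWN_APP_SOURCES are, to my knowledge, the real Android SDK APIs.)

```kotlin
import android.content.Intent
import android.net.Uri
import android.os.Build
import android.provider.Settings
import androidx.appcompat.app.AppCompatActivity

// Hypothetical activity, just to show where the check would live.
class SideloadGateActivity : AppCompatActivity() {

    // True if the user has allowed THIS app to install packages from
    // outside the official store. Android 8.0+ makes this a per-app
    // grant, and the app must also declare
    // android.permission.REQUEST_INSTALL_PACKAGES in its manifest.
    private fun canSideload(): Boolean =
        Build.VERSION.SDK_INT >= Build.VERSION_CODES.O &&
            packageManager.canRequestPackageInstalls()

    // Opens the system screen where the user flips the switch; those
    // are the "couple of clicks" mentioned above.
    private fun requestSideloadPermission() {
        val intent = Intent(
            Settings.ACTION_MANAGE_UNKNOWN_APP_SOURCES,
            Uri.parse("package:$packageName")
        )
        startActivity(intent)
    }
}
```

(On iOS there's no equivalent switch at all, which is why jailbreaking comes into it.)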
The reason more USERS don't sideload apps is risk: app stores put apps through at least nominal security checks to ensure that they aren't hosting malware. If an app you got from the app store turns out to be malware, you can report it and it will get taken down; nobody is forcing some random developer to pull a malware-laden app from his own site unless you get law enforcement involved.
The reason more developers don't go outside of the app store or don't WANT to go outside of the app store is money. The number of users who are going to sideload apps is *tiny* compared to the number of users who will go through the app store; that makes a HUGE difference in terms of income, so most developers try to keep it app-store friendly. Like, if tumblr were to say "fuck the app store" and just release their own app that you could download from the sidebar a few things would happen:
Downloads would drop to a fraction of their prior numbers instantly
iOS users would largely be locked out of using tumblr unless they fuck with their phones in a way that violates Apple's TOS and could get them booted out of their iOS ecosystem if they piss off the wrong people.
Ad revenue would collapse because not a lot of advertisers want to work with companies that are app-store unfriendly
They'd be kicked off of the main app marketplaces
So most people who develop apps don't want to put the time, effort, and money into developing an app that people might not pay for and that can't carry ads.
Which leads into another issue: the kind of people who generally make and use sideloaded apps aren't the kind of people who generally like profit-driven models. Indie apps are often slow to update and have minimal support because you're usually dealing with a tiny team of creators with a userbase of people who can almost certainly name ten flavors of Linux and are thus expected to troubleshoot and solve their own problems.
If this is the kind of thing you want to try, have at it. I'd recommend sticking to apps from the F-Droid Repository linked up above and being judicious about what you install. If you're using Apple and would have to jailbreak your phone to get a non-approved app on it, I'd recommend switching to another type of phone.
(For the record, you also aren't limited to android or ios as the operating system of your phone; there are linux-based OSs out there and weird mutations of android and such - I am not really a phone person so I can't tell you much about them, but they are out there!)
197 notes
·
View notes
Text
hey! did you know that duolingo is turning into an ai-driven company? here's what that means, per the USA Today article posted yesterday: "Duolingo is going to be "AI-first", the educational technology company announced, adding that it is replacing contract workers with artificial intelligence."
now yes. duolingo has used ai in the past. in 2024, the Duolingo Guides page reported that their AI uses user data to improve models. however, this did not literally replace human beings in the process.
according to their CEO, they believe this is going to be an example of how "generative AI can directly benefit our learners", despite the fact that this does nothing for the human contractors being replaced by robots.
here's why supporting AI is problematic: not only does it take away from human beings, but it's also bad for the environment. studies have shown that, particularly during the training of complex models, the infrastructure involved results in high energy consumption.
this also leads to increased greenhouse gas emissions and puts a strain on the water resources used for cooling data centers. studies also show that the manufacturing and disposal of AI hardware contribute to electronic waste and resource depletion.
so, below the cut is a list of language learning apps that do what duolingo does but aren't driven by robots! (and no, you do not have to stop using duolingo. these other apps simply provide the same services while taking care of both the environment and - as far as i know - human employees.)
Mango Languages (4.8 stars on the Apple App Store) - this app has had really great reviews, with people citing that you can get free access by using your library card (so it supports libraries!) and that it teaches actual pronunciation, whereas duo has been reported to be inaccurate in its teachings.
Babbel (4.7 stars on the Apple App Store) - people have rated this app as easy to use and convenient! it's also backed by researchers at Yale University and Michigan State University, with MSU reporting that after 10 hours of use, 96% of users saw better test scores and 73% became better speakers!
Lingvist (4.6 stars on the Apple App Store) - reviews cite this as a practical app, with one person saying it focuses on repetition and frequently spoken words instead of nouns and verbs you'd rarely use.
any other suggestions in my inbox will be included in this list. stop supporting "AI-first" companies, support human beings.
#maeberzatto#mae's blurbs!#duolingo#babbel#mango languages#lingvist#language learning#learning study#education#languages
42 notes
·
View notes
Text

—— RISPWRKIVES - INTRO - NAVI
DISCLAIMER!!!
This is a work of fiction. It is not intended to reflect the real personalities or actions of any real-life individuals. All characters and events are purely imaginary. Hate or extreme criticism will not be tolerated. If you do not like the content, please simply do not read it. Warnings are provided so read at your own risk!
Main account : @kookiesncreamri
Fic rec account : soon!!
Join my taglist | main account m.list
RKIVED FICS | YANDERE/DARK FICS
⋆。゚☁︎。⋆。 ゚☾ ゚。⋆ More ⬇️
RULES FOR MY BLOG
- MDNI!!! IF YOU ARE BELOW 18 KINDLY LEAVE MY BLOG AS MY FICS CONTAIN A LOT OF MATURE THEMES
- READ AT YOUR OWN RISK!
- No reposting or stealing my work. Do not copy, translate, or reupload my fics anywhere without permission.
- No hate, harassment, or unsolicited critiques. If you don’t like dark content, just scroll away or block. I’m not here to cater to everyone.
- Be kind in the inbox. Curious? Want to req? Sure. But if you’re rude, disrespectful, or pushy, then, respectfully, you’ll be blocked.
╰┈➤ ❝MY FICS INCLUDE THE FOLLOWING❞
yandere behavior / obsessive love / possessiveness / kidnapping / murder / stalking / psychological manipulation / gaslighting / noncon (always tagged) / emotional and physical abuse / unhealthy power dynamics / trauma and PTSD mentions / forced affection / toxic relationships / character death / mafia or criminal elements / gore / torture / stockholm syndrome / dark romance / smut (18+) / angst / unreliable narrators / morally gray characters / twisted comfort / idolverse and AU settings / revenge-driven plots / mental health triggers such as suicide or self-harm mentions.
Please read all content warnings carefully and proceed at your own risk.
(Tags or warnings that may not be included in that list will be in the fics warning list)
╰┈➤ ❝WHAT FICS OF MINE SHOULD U EXPECT TO BE REPOSTED HERE?❞
-
POSE FOR ME - JJK FF - MODEL!JK x PHOTOGRAPHER/CREATIVE DIRECTOR! OC
MORE!!
╰┈➤ ❝HOW DO YOU WRITE YOUR FICS?❞
- i find inspiration in things, whether it’s songs, movies or prompts
- i start draft writing or whatever it’s called😭 just to test whether i can write it well or not
- start planning. When i plan i use notion mostly.
- “where do you write your fics before posting?” Wattpad. I use the drafts feature in wattpad since i share a computer with my mom, and if i use google docs im dead if she finds out im writing smut and shit (strict parents).
- i start writingg! Writing series parts usually takes me like 6 hours… although it really depends on how long i want it to be.
- i start editing the banner for the fic. For that i use canva, hypic and pinterest.
- after writing i use both tools: Grammarly while writing and chatgpt for grammar checking afterwards. I try to make sure my grammar is alright and both apps help me a lot… so if you put my fics through an ai detector, yes, they will come out as “ai” because i use it to help correct my grammar, but no, i do not make ai write them. I write my own fics myself.
And that’s pretty much it!!!
-
I am rispwr/hellokittykookies. I deleted my account due to personal reasons, so i opened a new one. I wanted to open this new acc besides kookiesncreamri to separate my old fics, new fics, AND ESPECIALLY YANDERE/DARK THEMED FICS.
My friend @ririkookiemonster gave me the idea to make a separate account for my fucked up fics if i ever write one again, so ofc credits to her for giving me this idea.
20 notes
·
View notes
Text
pain mv ramblings
ok as promised i'm actually typing up my thoughts for PAIN, of which i have many and will probably not be very eloquent about but hear me out let me cook, etc etc (long ish post ahead, and this barely touches the insane 3H item lore drop. anyway).
we start with xia fei--and with allusions to christianity wrt the garden of eden; however, here 'paradise' is, importantly, distinguished from and in direct opposition to eden. xia fei is a tempter, like the snake, and lures 'the traveller' in with the promise of paradise with him -> vampires are somewhat associated with eroticism and seduction, and since we're already looking at these biblical allusions, i think it'd be interesting to associate each section with a deadly sin. in that case, xia fei's chapter is one of lust.
then we've got undead vein—one of his lyrics references neverland, so we're moving away from biblical mythology and into fairy tale ! deeply deeply unwell about the img of vein in the coffin, basically inviting you in, but more than that lol, it gives me a sense of...luring one to their death? violence and vein go hand in hand after all. plus @miyamiwu pointed out on vfei nation discord that in the og iteration peter pan kills the lost boys (if they start to show signs of maturing/growing up); "he has a fixation on control." maybe a stretch but, for fun, i think wrath fits vein--luring you in only to brutally murder you. yay <3
final chapter is liu xiao; he is as always ever the enigma, but i kind of like the ambiguity. his section stands out as v clearly separate from veifei's chapters—where they are predominantly red, he's all purple; it's the most musically distinct section; his subtitle isn't a descriptor but an instruction (i was gonna say imperative but,,) 'dance with him'—control, manipulation, and lack of agency is overt here. he's not trying to tempt or seduce or lure, he's just having a freaky time w his puppet who Could v well be symbolic of ltc im on board w that. there are references to the nutcracker (so the dance is v fitting). anyway im getting off track. his deadly sin is pride, because as written in my notes app: he thinks he's the shit
(edit: idk that i agree w myself saying pride. lol. like i said, just for fun.)
edit/p.s 2: i am not a lx thinker but in light of the item lore drop i think there is smth vaguely genuine in his want for someone to dance w him, w/o a sense of transaction. perhaps
further thoughts:
vampfei is so !!! wrt seduction/eroticism/being a model and undeadvein is narratively important and thematically resonant w veins horror motifs (thinks about the nightmare sequences in yingdu. vein Is a living nightmare to lg lmao)
but also interested that Neverland/boyhood is brought into the mix—violence and vengeance and killing 'lost' boys vs xf's charming seduction of falling into sin (paradise) with him; vein is more like, fall into ruin/the grave LOL
(edit: there's also an element of active pursuit/being hunted--you clear xf's lvl by tearing off his mask, but in vein's lvl it's about evading his capture)
also the peter pan association does add this different, tragic dimension to vein's character. thinking about LH0 insinuating that vein is mostly driven by his own amusement, and how that may come from a somewhat juvenile place. in addition neverland is a place where people forget their pasts—in peter pan his shadow is detachable, which i find v interesting in relation to vein-xiao weiying and how vein separates himself from his past. also !!! will have to find this post but i remember that an interpretation/translation of xiao weiying is miserable/dreary shadow (or reflection)....
speaking of—reflections/appearances/falsehood is a big theme here—the blood splattered pictures of model!fei in the elevator, and him at the centre (looking v suspiciously similar to vein style wise i might say. another win for vfei reflecting one another), the wall of framed pictures of vein in memoriam; lx's Wrong reflection while he dances w the puppet—xf even has a lyric in his verse referencing illusion.
i already mentioned xf as The Snake, but i need to restate that and add that he seems to be posing as the deer (the brooch 👀. trying to make himself seem charming and approachable, etc)
(p.s I SAID THIS BEFORE THE ITEM LORE DROP AND THE FACT THE ITEM YOU NEED TO UNVEIL FELIX'S MASK IS THE DEER BROOCH!!! IM NORMAL!!!!)
anyway like i said at the start i didn't talk much about the Lore that we got from the item sheets but trust me i have and continue to be insane about it. i also feel i didn't emphasise enough how much veifei paralleling/mirroring e/o is. Such a thing !!! to me !!! but it is and u just have to trust and believe <3
#link click#pain mv#xia fei#vein#liu xiao#house of the hot headed#veifei#ness lc tag#corner.txt#is this meta?#<- not rlly but <3 yay tumblr user veifei thoughts
12 notes
·
View notes
Text
Recent and Pinned options are now available for multi-session apps' sitemaps (e.g. Customer Service Workspace)
As many of us will have observed, the Recent and Pinned records options are now available in the Customer Service workspace app. Users can see the same Recent and Pinned records while moving between the Customer Service workspace and the Customer Service Hub app, giving a consistent experience. Recent: (screenshot) Pinned: (screenshot) Get all the details. Hope it helps.
0 notes