#amazon generators code
butchlifeguard · 1 year ago
Text
vital part of the Kids Cant Read discourse thats KILLING ME is that the only opinions we see are from english teachers. this is fine when the discussion is ostensibly about literacy but i think we should pass the phone to math teachers and computer science teachers. because im a little suspicious that the focus on stem for the past 20+ years could be a contributing factor
15 notes · View notes
russianreader · 6 months ago
Text
The Terror Scam Gig Economy
"May the good Lord take a liking to you and blow you up real soon!" Four dispatches from Russia's boom terror scam gig economy.
Petersburg police have arrested a 24-year-old freight handler who threw a Molotov cocktail at a military recruitment office (voenkomat). He had been hoodwinked by scammers whom he had contacted himself. The Petrograd District Court remanded Daniil Pavlov in custody to a pretrial detention center, Rotunda’s correspondent reports. Pavlov faces ten to twenty years’ imprisonment on charges of…
0 notes
iascoa · 2 years ago
Text
FREE Amazon Gift Card Code [2023] Codes Generator
Amazon Gift Card Tricks, Amazon Gift Card Free Codes – Amazon gift cards are plastic cards with a built-in microchip, loaded with an amount of money (normally not very high) for making purchases or payments, as defined by the National Commission for the Protection and Defence of Users.
0 notes
dessarchive · 3 months ago
Text
now introducing the future of music and entertainment, endless options of sound (eos)
eos is an app that i script into most of my drs. i got inspired to create it after coming across a video on youtube of someone re-designing spotify's UI. i’ve used spotify, apple music, amazon music, youtube music, and many more music streaming services and none of them ever lived up to my expectations, so i thought why not create what i’ve always wanted? i took a long hard look at all of the features i wish the current apps had and put them all into one. i posted about it before on my previous account but didn’t go into the actual details much. it started as a music streaming app when it was launched but became much more than that. it’s one of my favorite parts of my drs because i’m an avid music listener and the features are to die for. eos is the only music streaming app in my drs. i also have a ceo dr where i’m the ceo of it because this app has basically become my child!!! anyways here’s what i have after working on it for months!
eos was launched on october 31, 2001 by robyn fenty (she’s older LMAO) and aliyah haughton. the app immediately gained popularity as the first and only music streaming service. it was created as a space where everyone could enjoy music. years later it implemented audiobooks, podcasts, music videos, interviews, merch, and concert tickets. it stands out with its flawless performance that never crashes or has bugs, its exceptional algorithm for fresh experiences each playtime, and its features like eosoulmates that connects users through shared music tastes.
lyrics and subtitles are always available in any language desired by the user and the platform includes organization like no other. exclusive presale codes are available for top listeners of specific artists because the app has its own ticketing system. eos is free for all users as the founders wanted to make a space to unify people during life’s challenges. to maintain this while making sure artists are fairly paid, revenue is generated through a share of concert ticket and merchandise sales, in-app donations directly supporting artists, non-intrusive sponsorships and partnerships, and grants from philanthropic organizations. this guarantees that eos stays true to making music accessible while supporting creators and users globally.
to expand on existing sounds of music and entertainment, the service implemented dolby atmos to its highest quality with no extra price to artists for using it.
eos also features eos karaoke: sing it your way. within this are lyrics that are displayed in perfect sync with customizable highlighting options, while users can adjust the original vocals (mute, reduce, or add harmonies) for a personalized performance. advanced pitch tracking gives real-time feedback, along with timing guides and a practice mode for perfecting songs. voice effects like reverb, autotune, and fun filters are available to take things to a higher level or have fun with friends and family. dual-screen mode connects to tvs for party setups and users can enjoy duets with friends or group singing for up to 10 people. karaoke playlists are curated based on mood, listening habits, or vocal range. sessions can be recorded and shared with friends or on social media. a scoring system with feedback helps users improve their singing while earning fun badges. eos karaoke also offers live virtual karaoke rooms, offline mode for downloaded tracks, and customizable themes for lyric screens and backgrounds. a special kid-friendly mode ensures the fun for all age groups.
eos allows music and entertainment to be accessible, fun, and immersive to everyone.
358 notes · View notes
cushfuddled · 6 months ago
Text
I saw a post which claimed that, since Americans are spending unprecedented amounts of money on holiday gifts this year [1][2]...
the American public isn't actually as strapped for cash as we say or think we are, and
Americans didn’t vote for Trump out of economic frustration.
Like.
I hope you guys know Americans aren't splurging on gifts because we can afford to do so.
The majority of Americans live paycheck to paycheck [3], and yet vacations are on the rise, with Millennials and Gen Z-ers at the front of the trend. [4][5]
It’s not excess capital. It’s nihilism.[6][7][8]
"If you work hard and save your money, someday you can buy a house/raise a family/retire." So goes the conventional wisdom, now fine viscera under the wheels of an Amazon forklift. Even older generations can't afford to retire these days [9]. You can buy a shed for the price of a master's degree. And how are you supposed to raise a child when your full-time job barely covers your grocery bills?
Knowing they'll never travel as a retiree, people are splurging on plane tickets right out of school. Knowing class mobility is a lottery pull, people are dumping their last few pennies into meme tokens and other get-rich-quick schemes. Knowing they're already saddled with lifelong debt, people are saying "fuck it" and grabbing a shovel—because at this point, what's a car payment on top of every other loan they'll never repay? "Things will keep getting worse anyway."
Americans are spending stupid amounts of money on vacations [10] and extravagant gifts [11], yes—but they're not spending THEIR money. They're spending Klarna's money, and the bank's money, and when the bills come due people aren't paying them. We're all just doing kickflips on our way down the drain.
The question, "How does killing the UnitedHealthCare CEO solve anything?" misses the point. The shooter may have believed he was doing the American people a favor, but I don't think the majority of Americans are cheering on Brian's death because they believe it will manifest universal healthcare. It's just nice to see the rich criminals who profit off our pain suffer for their choices.
Even if the Dems had acknowledged our financial straits (I find Atrioc's video "Slowly, Then All at Once" to be very helpful re: why the numbers look good but nobody can afford to live)...I still don't think Kamala Harris would've won the presidency. Again, Americans don't believe progress is possible anymore—at least not via our current system of government. Extremists are banking on a wholesale descent into anarchy. Your everyday worker is distracting themselves from impending financial implosion with daily Beverages (I'm stopping here to take a sip of my Rockstar energy drink). Hope is a heavy burden. Instead, people keep their eyes on their feet. One day at a time. Sometimes on its way to the brick wall, their speeding car hits a CEO—and sometimes it mows down a crowd of schoolchildren. Sometimes we're all just trashing the bathroom.
That's Donald Trump's presidential win, to me. Let the horse take over the hospital, America declared—why not, if none of us can afford a hospital visit anyway. Let the nation descend into anarchy and fascism—why not, if we never had rights/liberty to begin with.
It's not logical. It's lashing out in pain like a cornered animal.
The rule of law doesn't apply to the wealthy, as emblemized by our incumbent president's 34 felony charges. It punishes the marginalized by design, for the benefit of corrupt institutions. Harris would've given us a chance to get back on our feet...but with her centrist prosecutorial approach, she represents the law. Donald Trump represents chaos. He's a champion of the CEOs who bankrupt and maim and kill us, but as a certifiable toddler with no object permanence and a suitcase full of ketchup packets and nuclear launch codes, he's also a fucking nightmare to babysit around the White House. That's the best some people can hope for in this country: To give their tormentors a headache. To "trigger the libs." To treat their representatives to the smallest taste of their own helplessness and hopelessness and fear and anger and pain.
People do not have money. People do not have hope. People do not have compassion.
I don't feel any sympathy for Trump voters, and I don't mean to minimize the role of bigotry in this election. This country was founded on genocide and slavery, and that legacy still permeates our culture. I only mean to explain—not excuse—some of this group's behavior. It's a trend suffered on all sides of the aisle: Nihilism externalized as sabotage, whether directed at oneself or others. People are so sick of watching this boat sink into the ocean they've set it on fire just to feel like they had a say in it.
87 notes · View notes
hardcore-gaming-101 · 7 months ago
Text
Simple 1500 Series Vol. 57 - The Maze
This article is part of our Japanese Obscurities feature. We put out a whole book about them, which is available as both a full color hardcover and a Kindle ebook from Amazon! If you’d like to see more of these features, please check out the book and if you enjoyed it, leave a five star review so we can do a follow up with even more interesting, offbeat, or historically important Japanese games!
As the name implies, the 57th volume of the Simple 1500 Series is an original game that takes place entirely within randomized mazes. The game consists entirely of 1v1 battles in which each player is tasked with grabbing three keys and making it to the exit before their opponent can. The keys are color-coded and players need to grab the ones that correspond to their color in order to unlock their respective exit. However, one of the keys a player needs is always held by the opposite player at the start, forcing them to interact in every match. To aid in their mission, each player has melee attacks, a bazooka for slow ranged attacks, and an item menu that lets them place traps and use/drop items they’ve found. It’s possible to knock out the other player, especially if you can lure them into an explosive trap, but doing so is only a minor setback, so traps that waste their time like sticky slime puddles generally have more impact.
85 notes · View notes
jcmarchi · 19 days ago
Text
Soham Mazumdar, Co-Founder & CEO of WisdomAI – Interview Series
New Post has been published on https://thedigitalinsider.com/soham-mazumdar-co-founder-ceo-of-wisdomai-interview-series/
Soham Mazumdar is the Co-Founder and CEO of WisdomAI, a company at the forefront of AI-driven solutions. Prior to founding WisdomAI in 2023, he was Co-Founder and Chief Architect at Rubrik, where he played a key role in scaling the company over a 9-year period. Soham previously held engineering leadership roles at Facebook and Google, where he contributed to core search infrastructure and was recognized with the Google Founder’s Award. He also co-founded Tagtile, a mobile loyalty platform acquired by Facebook. With two decades of experience in software architecture and AI innovation, Soham is a seasoned entrepreneur and technologist based in the San Francisco Bay Area.
WisdomAI is an AI-native business intelligence platform that helps enterprises access real-time, accurate insights by integrating structured and unstructured data through its proprietary “Knowledge Fabric.” The platform powers specialized AI agents that curate data context, answer business questions in natural language, and proactively surface trends or risks—without generating hallucinated content. Unlike traditional BI tools, WisdomAI uses generative AI strictly for query generation, ensuring high accuracy and reliability. It integrates with existing data ecosystems and supports enterprise-grade security, with early adoption by major firms like Cisco and ConocoPhillips.
You co-founded Rubrik and helped scale it into a major enterprise success. What inspired you to leave in 2023 and build WisdomAI—and was there a particular moment that clarified this new direction?
The enterprise data inefficiency problem was staring me right in the face. During my time at Rubrik, I witnessed firsthand how Fortune 500 companies were drowning in data but starving for insights. Even with all the infrastructure we built, less than 20% of enterprise users actually had the right access and know-how to use data effectively in their daily work. It was a massive, systemic problem that no one was really solving.
I’m also a builder by nature – you can see it in my path from Google to Tagtile to Rubrik and now WisdomAI. I get energized by taking on fundamental challenges and building solutions from the ground up. After helping scale Rubrik to enterprise success, I felt that entrepreneurial pull again to tackle something equally ambitious.
Last but not least, the AI opportunity was impossible to ignore. By 2023, it became clear that AI could finally bridge that gap between data availability and data usability. The timing felt perfect to build something that could democratize data insights for every enterprise user, not just the technical few.
The moment of clarity came when I realized we could combine everything I’d learned about enterprise data infrastructure at Rubrik with the transformative potential of AI to solve this fundamental inefficiency problem.
WisdomAI introduces a “Knowledge Fabric” and a suite of AI agents. Can you break down how this system works together to move beyond traditional BI dashboards?
We’ve built an agentic data insights platform that works with data where it is – structured, unstructured, and even “dirty” data. Rather than asking analytics teams to run reports, business managers can directly ask questions and drill into details. Our platform can be trained on any data warehousing system by analyzing query logs.
We’re compatible with major cloud data services like Snowflake, Microsoft Fabric, Google’s BigQuery, Amazon’s Redshift, Databricks, and Postgres, as well as plain document formats like Excel, PDF, and PowerPoint.
Unlike conventional tools designed primarily for analysts, our conversational interface empowers business users to get answers directly, while our multi-agent architecture enables complex queries across diverse data systems.
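To make the multi-agent idea concrete, here is a minimal sketch of one way dispatch across diverse data systems can work. This is an outside illustration of the general pattern, not WisdomAI's actual architecture; the agent names, keyword routing, and stub sources are all invented.

```python
# Illustrative multi-agent dispatch across data systems (hypothetical names and
# routing; a sketch of the general pattern, not WisdomAI's code).
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class SourceAgent:
    name: str
    keywords: tuple                # crude relevance signal for the demo
    run: Callable[[str], str]      # answers a sub-question from its own source

def warehouse_agent(q: str) -> str:
    return f"[warehouse] rows relevant to: {q!r}"

def documents_agent(q: str) -> str:
    return f"[documents] passages relevant to: {q!r}"

AGENTS = [
    SourceAgent("warehouse", ("revenue", "pipeline", "deal"), warehouse_agent),
    SourceAgent("documents", ("contract", "manual", "policy"), documents_agent),
]

def answer(question: str) -> List[str]:
    """Route the question to every agent whose source looks relevant and
    collect the partial answers for a final synthesis step."""
    relevant = [a for a in AGENTS if any(k in question.lower() for k in a.keywords)]
    return [a.run(question) for a in (relevant or AGENTS)]

print(answer("Which deals are blocked on contract review?"))
```

A real system would replace the keyword matching with learned routing and add a synthesis step that merges the partial answers into a single response.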
You’ve emphasized that WisdomAI avoids hallucinations by separating GenAI from answer generation. Can you explain how your system uses GenAI differently—and why that matters for enterprise trust?
Our AI-Ready Context Model trains on the organization’s data to create a universal context understanding that answers questions with high semantic accuracy while maintaining data privacy and governance. Furthermore, we use generative AI to formulate well-scoped queries that allow us to extract data from the different systems, as opposed to feeding raw data into the LLMs. This is crucial for addressing hallucination and safety concerns with LLMs.
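As a reader's illustration of that separation, the sketch below keeps the language model confined to drafting a well-scoped query, while the answer itself comes from real database rows. Here generate_sql() is a hard-coded stand-in for an LLM call, and the schema and function names are hypothetical rather than WisdomAI's API.

```python
# Sketch of "generative AI writes the query, the database writes the answer".
# generate_sql() stands in for an LLM call; the demo schema is invented.
import sqlite3

def generate_sql(question: str, schema_hint: str) -> str:
    # In the real pattern, an LLM is prompted with the question plus schema
    # context and returns a scoped query; hard-coded here for the demo.
    return "SELECT region, SUM(amount) FROM sales GROUP BY region"

def run_readonly(conn: sqlite3.Connection, sql: str) -> list:
    # Guardrail: only a well-scoped read query ever touches the data,
    # so the model cannot hallucinate result values.
    if not sql.lstrip().lower().startswith("select"):
        raise ValueError("generated query must be a SELECT")
    return conn.execute(sql).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("west", 120.0), ("east", 80.0), ("west", 40.0)])

sql = generate_sql("What is revenue by region?", "sales(region, amount)")
print(run_readonly(conn, sql))  # values come from real rows, not the model
```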
You coined the term “Agentic Data Insights Platform.” How is agentic intelligence different from traditional analytics tools or even standard LLM-based assistants?
Traditional BI stacks slow decision-making because every question has to fight its way through disconnected data silos and a relay team of specialists. When a chief revenue officer needs to know how to close the quarter, the answer typically passes through half a dozen hands—analysts wrangling CRM extracts, data engineers stitching files together, and dashboard builders refreshing reports—turning a simple query into a multi-day project.
Our platform breaks down those silos and puts the full depth of data one keystroke away, so the CRO can drill from headline metrics all the way to row-level detail in seconds.
No waiting in the analyst queue, no predefined dashboards that can’t keep up with new questions—just true self-service insights delivered at the speed the business moves.
How do you ensure WisdomAI adapts to the unique data vocabulary and structure of each enterprise? What role does human input play in refining the Knowledge Fabric?
Working with data where and how it is – that’s essentially the holy grail for enterprise business intelligence. Traditional systems aren’t built to handle unstructured data or “dirty” data with typos and errors. When information exists across varied sources – databases, documents, telemetry data – organizations struggle to integrate this information cohesively.
Without capabilities to handle these diverse data types, valuable context remains isolated in separate systems. Our platform can be trained on any data warehousing system by analyzing query logs, allowing it to adapt to each organization’s unique data vocabulary and structure.
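Reduced to its simplest possible form, "learning vocabulary from query logs" might look like the toy example below: count which tables and columns analysts actually query, then promote the frequent ones into context for the query generator. The log lines and regex extraction are invented for illustration; a production pipeline would use a real SQL parser.

```python
# Toy example of mining a data vocabulary from SQL query logs (illustrative;
# the log, regexes, and column handling are invented, not the actual pipeline).
import re
from collections import Counter

query_log = [
    "SELECT cust_nm, arr FROM dim_customer JOIN fct_revenue USING (cust_id)",
    "SELECT arr, churn_flag FROM fct_revenue WHERE fiscal_qtr = '2024Q4'",
    "SELECT cust_nm FROM dim_customer WHERE region = 'EMEA'",
]

tables, columns = Counter(), Counter()
for sql in query_log:
    tables.update(t.lower() for t in re.findall(r"(?:FROM|JOIN)\s+(\w+)", sql, re.I))
    m = re.search(r"SELECT\s+(.+?)\s+FROM", sql, re.I)
    if m:
        columns.update(c.strip().lower() for c in m.group(1).split(","))

# Frequently queried names become candidate context for query generation.
print(tables.most_common())   # [('dim_customer', 2), ('fct_revenue', 2)]
print(columns.most_common())  # cust_nm and arr dominate
```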
You’ve described WisdomAI’s development process as ‘vibe coding’—building product experiences directly in code first, then iterating through real-world use. What advantages has this approach given you compared to traditional product design?
“Vibe coding” is a significant shift in how software is built where developers leverage the power of AI tools to generate code simply by describing the desired functionality in natural language. It’s like an intelligent assistant that does what you want the software to do, and it writes the code for you. This dramatically reduces the manual effort and time traditionally required for coding.
For years, the creation of digital products has largely followed a familiar script: meticulously plan the product and UX design, then execute the development, and iterate based on feedback. The logic was clear because investing in design upfront minimizes costly rework during the more expensive and time-consuming development phase. But what happens when the cost and time to execute that development drastically shrinks? This capability flips the traditional development sequence on its head. Suddenly, developers can start building functional software based on a high-level understanding of the requirements, even before detailed product and UX designs are finalized.
With the speed of AI code generation, the effort involved in creating exhaustive upfront designs can, in certain contexts, become relatively more time-consuming than getting a basic, functional version of the software up and running. The new paradigm in the world of vibe coding becomes: execute (code with AI), then adapt (design and refine).
This approach allows for incredibly early user validation of the core concepts. Imagine getting feedback on the actual functionality of a feature before investing heavily in detailed visual designs. This can lead to more user-centric designs, as the design process is directly informed by how users interact with a tangible product.
At WisdomAI, we actively embrace AI code generation. We’ve found that by embracing rapid initial development, we can quickly test core functionalities and gather invaluable user feedback early in the process, live on the product. This allows our design team to then focus on refining the user experience and visual design based on real-world usage, leading to more effective and user-loved products, faster.
From sales and marketing to manufacturing and customer success, WisdomAI targets a wide spectrum of business use cases. Which verticals have seen the fastest adoption—and what use cases have surprised you in their impact?
We’ve seen transformative results with multiple customers. At the F500 oil and gas company ConocoPhillips, drilling engineers and operators now use our platform to query complex well data directly in natural language. Before WisdomAI, these engineers needed technical help for even basic operational questions about well status or job performance. Now they can instantly access this information while simultaneously comparing against best practices in their drilling manuals—all through the same conversational interface. They evaluated numerous AI vendors in a six-month process, and our solution delivered a 50% accuracy improvement over the closest competitor.
At Descope, a hyper-growth cybersecurity company, WisdomAI is used as a virtual data analyst for Sales and Finance. We reduced report creation time from 2-3 days to just 2-3 hours—a 90% decrease. This transformed their weekly sales meetings from data-gathering exercises to strategy sessions focused on actionable insights. As their CRO notes, “Wisdom AI brings data to my fingertips. It really democratizes the data, bringing me the power to go answer questions and move on with my day, rather than define your question, wait for somebody to build that answer, and then get it in 5 days.” This ability to make data-driven decisions with unprecedented speed has been particularly crucial for a fast-growing company in the competitive identity management market.
A practical example: A chief revenue officer asks, “How am I going to close my quarter?” Our platform immediately offers a list of pending deals to focus on, along with information on what’s delaying each one – such as specific questions customers are waiting to have answered. This happens with five keystrokes instead of five specialists and days of delay.
Many companies today are overloaded with dashboards, reports, and siloed tools. What are the most common misconceptions enterprises have about business intelligence today?
Organizations sit on troves of information yet struggle to leverage this data for quick decision-making. The challenge isn’t just about having data, but working with it in its natural state – which often includes “dirty” data not cleaned of typos or errors. Companies invest heavily in infrastructure but face bottlenecks with rigid dashboards, poor data hygiene, and siloed information. Most enterprises need specialized teams to run reports, creating significant delays when business leaders need answers quickly. The interface where people consume data remains outdated despite advancements in cloud data engines and data science.
Do you view WisdomAI as augmenting or eventually replacing existing BI tools like Tableau or Looker? How do you fit into the broader enterprise data stack?
As noted earlier, we’re compatible with the major cloud data services (Snowflake, Microsoft Fabric, Google’s BigQuery, Amazon’s Redshift, Databricks, and Postgres) and with plain document formats like Excel, PDF, and PowerPoint. Our approach transforms the interface where people consume data, which has remained outdated despite advancements in cloud data engines and data science.
Looking ahead, where do you see WisdomAI in five years—and how do you see the concept of “agentic intelligence” evolving across the enterprise landscape?
The future of analytics is moving from specialist-driven reports to self-service intelligence accessible to everyone. BI tools have been around for 20+ years, but adoption hasn’t even reached 20% of company employees. Meanwhile, in just twelve months, 60% of workplace users adopted ChatGPT, many using it for data analysis. This dramatic difference shows the potential for conversational interfaces to increase adoption.
We’re seeing a fundamental shift where all employees can directly interrogate data without technical skills. The future will combine the computational power of AI with natural human interaction, allowing insights to find users proactively rather than requiring them to hunt through dashboards.
Thank you for the great interview; readers who wish to learn more should visit WisdomAI.
0 notes
okitanoniisan · 9 months ago
Text
i'll be honest, the way everyone in the rgg fandom seems to be purposefully ignoring red flags and holding out hope that the amazon live action series (adapted by a couple of americans who have not played the entire series and also seemingly have not played yakuza 0 nor anything past 6) is like. where am i.
when the disco elysium amazon series got announced, the entire fandom was immediately like. Fuck This, this goes against the very essence of the game, there's NO way they'll get this right, they're going to censor and tone down and pander to the broadest audience. the rgg fandom is somehow doing almost the exact opposite and displaying the most bizarre blind faith.
it's okay, you can say this thing is gonna suck shit, the warning sirens have been blaring for months, they clearly do not respect the source material or video games as a vessel for storytelling, the people behind it are literally money-hungry american capitalist crypto bros who recognized a business opportunity and are hellbent on making some low effort schlock for Consumption.
complain. please. i'm begging you all.
i have not seen a single promising statement from cast and crew and i feel like i'm going insane with how forgiving everyone is. are standards this low?? i don't understand how anyone can be okay with this when everything that makes rgg what it is has been stripped out and yet everyone behind this series continues to act as if they've cracked the code to make it all Better, managed to rewrite everything by making it more generic and stereotypical when the fucking games go on to be so much more than your average yakuza action flick where the cool, strong, badass everyman does excessive violence.
111 notes · View notes
tangentiallly · 6 months ago
Text
One way to spot patterns is to show AI models millions of labelled examples. This method requires humans to painstakingly label all this data so they can be analysed by computers. Without them, the algorithms that underpin self-driving cars or facial recognition remain blind. They cannot learn patterns.
The algorithms built in this way now augment or stand in for human judgement in areas as varied as medicine, criminal justice, social welfare and mortgage and loan decisions. Generative AI, the latest iteration of AI software, can create words, code and images. This has transformed them into creative assistants, helping teachers, financial advisers, lawyers, artists and programmers to co-create original works.
To build AI, Silicon Valley’s most illustrious companies are fighting over the limited talent of computer scientists in their backyard, paying hundreds of thousands of dollars to a newly minted Ph.D. But to train and deploy them using real-world data, these same companies have turned to the likes of Sama, and their veritable armies of low-wage workers with basic digital literacy, but no stable employment.
Sama isn’t the only service of its kind globally. Start-ups such as Scale AI, Appen, Hive Micro, iMerit and Mighty AI (now owned by Uber), and more traditional IT companies such as Accenture and Wipro are all part of this growing industry estimated to be worth $17bn by 2030.
Because of the sheer volume of data that AI companies need to be labelled, most start-ups outsource their services to lower-income countries where hundreds of workers like Ian and Benja are paid to sift and interpret data that trains AI systems.
Displaced Syrian doctors train medical software that helps diagnose prostate cancer in Britain. Out-of-work college graduates in recession-hit Venezuela categorize fashion products for e-commerce sites. Impoverished women in Kolkata’s Metiabruz, a poor Muslim neighbourhood, have labelled voice clips for Amazon’s Echo speaker. Their work couches a badly kept secret about so-called artificial intelligence systems – that the technology does not ‘learn’ independently, and it needs humans, millions of them, to power it. Data workers are the invaluable human links in the global AI supply chain.
This workforce is largely fragmented, and made up of the most precarious workers in society: disadvantaged youth, women with dependents, minorities, migrants and refugees. The stated goal of AI companies and the outsourcers they work with is to include these communities in the digital revolution, giving them stable and ethical employment despite their precarity. Yet, as I came to discover, data workers are as precarious as factory workers, their labour is largely ghost work and they remain an undervalued bedrock of the AI industry.
As this community emerges from the shadows, journalists and academics are beginning to understand how these globally dispersed workers impact our daily lives: the wildly popular content generated by AI chatbots like ChatGPT, the content we scroll through on TikTok, Instagram and YouTube, the items we browse when shopping online, the vehicles we drive, even the food we eat, it’s all sorted, labelled and categorized with the help of data workers.
Milagros Miceli, an Argentinian researcher based in Berlin, studies the ethnography of data work in the developing world. When she started out, she couldn’t find anything about the lived experience of AI labourers, nothing about who these people actually were and what their work was like. ‘As a sociologist, I felt it was a big gap,’ she says. ‘There are few who are putting a face to those people: who are they and how do they do their jobs, what do their work practices involve? And what are the labour conditions that they are subject to?’
Miceli was right – it was hard to find a company that would allow me access to its data labourers with minimal interference. Secrecy is often written into their contracts in the form of non-disclosure agreements that forbid direct contact with clients and public disclosure of clients’ names. This is usually imposed by clients rather than the outsourcing companies. For instance, Facebook-owner Meta, who is a client of Sama, asks workers to sign a non-disclosure agreement. Often, workers may not even know who their client is, what type of algorithmic system they are working on, or what their counterparts in other parts of the world are paid for the same job.
The arrangements of a company like Sama – low wages, secrecy, extraction of labour from vulnerable communities – veer towards inequality. After all, this is ultimately affordable labour. Providing employment to minorities and slum youth may be empowering and uplifting to a point, but these workers are also comparatively inexpensive, with almost no relative bargaining power, leverage or resources to rebel.
Even the objective of data-labelling work felt extractive: it trains AI systems, which will eventually replace the very humans doing the training. But of the dozens of workers I spoke to over the course of two years, not one was aware of the implications of training their replacements, that they were being paid to hasten their own obsolescence.
— Madhumita Murgia, Code Dependent: Living in the Shadow of AI
71 notes · View notes
amphibianauthor · 7 months ago
Text
Ao3 HTML/Coding Resources Part II
This is the HTML/Coding for Website-mimicking resources in Archive of Our Own (Ao3). To find Part I, where I go over the Basics, General Text HTML, and some Fancy Formatting (images, dividers, columns, photos, tabs, etc.), CLICK HERE!
Other Websites:
Texting
-How to make iOS Text Messages on Ao3 by CodenameCarrot, La_Temperanza
-A Quick Generator for Embeddable iOS Text Messages by 221b_ee
-imessage Skin by Adzaema
-Retro imessage by Adzaema
-Basic Text Message Work Skin by ProfessorMotz
-Bubble platform [workskin] by Khashana
-Chat Log HTML by deathbymistletoe
-LINE Messenger/Chat by imperiousmarshmellow
-IDOLish Rabbit Chat Workskin by associate
-Replika workskin by FaeriMagic
-Texting Workskin to match light/dark mode by irrationalpie
Tumblr
-Tumblr style CSS Tweaks by Aposiopesis
-Ao3 Workskin Testing and Tutorials by junietuesday25 tumblr DM
-How to make Tumblr Posts on Ao3 by phyyripo
-Plain Text Social Media Platforms by anubisms
-Tumblr Post Work Skin by tsukinosaugi
Twitter
-Repository - Twitter by gadaursan
-How to mimic Social Media in an Ao3 work by aerynevenstar
-Twitter Work Skin Template by etc e tal
-Twitter Workskin: Tweets and Profile by starskin
-Twitter Mock-Up by TheBrookesNook
Ao3/Fandom
-How to mimic Authors notes and Kudos/Comment Buttons by La_Temperanza
-How to mimic AO3 Comments by bittermoons
-How to add mobile Ao3 in your fic by DemigodofAgni
-How to make a fanfic style header Ao3 style by ElectricAlice
-Template for adding post chapter content by SpookyTesting
-CSS based full Ao3 fic integration (Header/Overview, Comments, Title, Summary & Buttons) by deciMae
-How to Mimic LiveJournal Posts and Comments by cursedcuriosities
-Dreamwidth Entries & Comments Work Skin  by folk_melody 
Facebook/Instagram/Whatsapp
-Whatsapp Group Chat builder by FestiveFerret
-How to make Facebook Messenger Chat on Ao3 by ran_a_dom
-Whatsapp Work Skin Template Revamped by etc e tal
-Whatsapp group chat skin by ovely
-Instagram DMs for Ao3 by monarch_rhapsodies
-How to make Instagram DM mockup by xslytherclawx
-Penstagram chats on ao3 by deciMae
Snapchat
-Snapchat skin by Azdaema
-Snapchat Template for Ao3 by starskin
Reddit/Forum
-UPDATED Reddit Skin by diamine
-2020 Reddit Work Skin by timstokerlovebot
-Reddit Work Skin CSS & HTML by knave_of_swords
-How to mimic Social Media in an Ao3 work by aerynevenstar
-template Reddit Skin by spookedcroon
-template:Subreddit page by ireseen
-Ao3 workskin for Forum Thread by fencesit
-Ao3 workskin for Forum Thread [Expansion Pack] by AMereDream
-How to mimic 4chan posts without just taking screenshots of 4chan
Twitch/Youtube
-Mimicking Twitch Chat for fics by Ultraviollett
-Twitch Chat Work Skin by cherrari
-Workskin testing by tohmas [Youtube comments]
-Youtube Work Skin by 1864s
-Youtube Comment Section Workskin by LupaMoe
Discord/Slack/Zoom
-2023 Discord Theme Workskin by TrojanTeapot
-Discord Work Skin by unpredictableArtist
-Discord (Dark Theme) Workskin by Heterochromia_Mars
-Skin for Recreating Discord’s Server Member List by SpookyTesting
-Ao3 Workskin Testing and Tutorials by junietuesday25
-Slack Workskin by Khashana
-Zoom inspired Ao3 skin by mystyrust
Wikipedia 
-Fake Wikipedia article about a TV show: Work Skin by Anonymous 
-Wikipedia article work skin by styletests
-SCP Wiki Style Workskin by thesnager
Working Games in Ao3 Tutorials
-Logic Grid Puzzle Work Skin & Tutorial by BookKeep
-The Case Of The Clickable Murdle by VThinksOn
Review Sites:
-Yelp Reviews by kiwiana
-Amazon Reviews by kiwiana
-Rate My Professor Work Skin by BookKeep
Video Game Dialog Mimics
-Dialog [workskin] by Clover_Zero
-Dialogue Workskin (with parallax BG effect) by mystyrust
-My S Ranks--System Windows by unpredictableArtist [computer dialog workskin]
-Tutorial: Ace Attorney Work Skin by QuailFence
-Among Us Ao3 skin by mystyrust
-How to Mimic Undertale Fonts on Ao3 by La_Temperanza
-Tutorial:Rain Code Work Skin by faish
-Baldur's Gate 3 Documents Work Skin by Professor_Rye
-SpookyTesting has SOO many Nintendo based ones
-Mimicking Minecraft for some fics by Ultraviollett
-Runescape Right Click Menu Formatting by fennfics
-How to put Z skits in your Tales fics by wingedcatgirl
-How to make Honkai: Star Rail Messages by html_hell (jihnari)
-Hold-hands inspired Texting skin by cursedcuriosities (SetsuntaMew)
-Simple Linkshell Ao3 Work Skin by Pent – Final Fantasy XIV mimic
-Homestuck Chat Clients by 77angel-skins
-Workskin: Slay the Princess by ASpooky
-Slay the Princess: Updated Workskin by Lilto
Misc. Sites
-How to mimic Deadpool Thinking boxes by La_Temperanza
-FetLife Skin [Work Skin] by Khashana
-Disco Elysium workskin by SarunoHadaki
-StarTrek PADD workskin by duskyspirit
-MDZS-themed letters by allollipoppins
-A Newbie's Guide to Podficcing by Adzaema [skin for podfics]
-Skin for making Character Intro Cards by SpookyTesting
-Kpop Photocards by legonerd
-OVR System Workskin by unpredictableArtist
-How to make Stylized CSS Card Links for your fics by buttertartz
-vroom vroom kachow: Formula1 Race Results Workskin by mackerel_cheese
Bonus: Ever wanted to see how crazy HTML can be on AO3? Try playing But can it run Doom? or Tropémon by gifbot
Happy Creating!
Last updated: Feb 8 2025 (Have a resource that you want to share? My inbox is open!)
View Part I with HTML Basics HERE!
61 notes · View notes
mostlysignssomeportents · 7 months ago
Text
“That Makes Me Smart”
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/12/04/its-not-a-lie/#its-a-premature-truth
The Biden administration disappointed, frustrated and enraged in so many ways, including abetting a genocide – but one consistent bright spot over the past four years was the unseen-for-generations frontal assault on corporate power and corporate corruption.
The three words that define this battle above all others are "unfair and deceptive" – words that appear in Section 5 of the Federal Trade Commission Act and other legislation modeled on it, like 49 USC Section 41712(a), which gives the Department of Transportation the power to ban "unfair and deceptive" practices as well:
https://pluralistic.net/2023/01/10/the-courage-to-govern/#whos-in-charge
When Congress created an agency to punish "unfair and deceptive" conduct, they were saying to the American people, "You have a right not to be cheated." While this may sound obvious, it's hardly how the world works.
To get a sense of how many ripoffs are part of our daily lives, let's take a little tour of the ways that the FTC and other agencies have used the "unfair and deceptive" standard to defend you over the past four years. Take Amazon Prime: Amazon executives emailed one another, openly admitting that in their user tests, the public was consistently fooled by Amazon's "get free shipping with Prime" dialog boxes, thinking they were signing up for free shipping and not understanding that they were actually signing up to send the company $140/year. They had tested other versions of the signup workflow that users were able to correctly interpret, but they decided to go with the confusing version because it made them more money:
https://arstechnica.com/tech-policy/2024/05/amazon-execs-may-be-personally-liable-for-tricking-users-into-prime-sign-ups/
Getting you signed up for Prime isn't just a matter of taking $140 out of your pocket once – because while Amazon has produced a greased slide that whisks you into a recurring Prime subscription, the process for canceling that recurring payment is more like a greased pole you must climb to escape the Prime pit. This is typical of many services, where signing up happens in a couple clicks, but canceling is a Kafkaesque nightmare. The FTC decided that this was an "unfair and deceptive" business practice and used its authority to create a "Click to Cancel" rule that says businesses have to make it as easy to cancel a recurring payment as it was to sign up for it:
https://www.theregister.com/2023/07/12/ftc_cancel_subscriptions/
Once businesses have you locked in, they also spy on you, ingesting masses of commercial surveillance data that you "consented" to by buying a car, or clicking to a website, or installing an app, or just physically existing in space. They use this to implement "surveillance pricing," raising prices based on their estimation of your desperation. Uber got caught doing this a decade ago, raising the price of taxi rides for users whose batteries were about to die, but these days, everyone's in on the game. For example, McDonald's has invested in a company that spies on your finances to determine when your payday is, and then raises the price of your usual breakfast sandwich by a dollar the day you get paid:
https://pluralistic.net/2024/06/05/your-price-named/#privacy-first-again
Everything about this is "unfair and deceptive" – from switching prices the second you click into the store to the sham of consent that consists of, say, picking up your tickets to a show and being ordered to download an app that comes with 20,000 words of terms and conditions that allows the company that sends you a QR code to spy on you for the rest of your life in any way they can and sell the data to anyone who'll buy it.
As bad as it is to be trapped in an abusive relationship as a shopper, it's a million times worse to be trapped as a worker. One in 18 American workers is under a noncompete "agreement" that makes it illegal for you to change jobs and work for someone else in the same industry. The vast majority of these workers are in low-waged food-service jobs. The primary use of the American noncompete is to stop the cashier at Wendy's from getting an extra $0.25/hour by taking a job at McDonald's.
Noncompetes are shrouded in a fog of easily dispelled bossly bullshit: claims that noncompetes raise wages (empirically, this is untrue), or that they enable "IP"-intensive industries to grow by protecting their trade secrets. This claim is such bullshit: you can tell by the fact that noncompetes are banned under California's state constitution and yet the most IP-intensive industries have attracted hundreds of billions – if not trillions – in investment capital even though none of their workforce can be bound under a noncompete. The FTC's order banning noncompetes for every worker in America simply brings the labor regime that created Silicon Valley and Hollywood to the rest of the country:
https://pluralistic.net/2023/10/26/hit-with-a-brick/#graceful-failure
Noncompetes aren't the only "unfair and deceptive" practice used against American workers. The past decade has seen the rise of private equity consolidation in several low-waged industries, like pet grooming. The new owners of every pet grooming salon within 20 miles of your house haven't just slashed workers' wages, they've also cooked up a scheme that lets them charge workers thousands of dollars if they quit these shitty jobs. This scheme is called a "training repayment agreement provision" (TRAP!): workers who are TRAPped at Petsmart are made to work doing menial jobs like sweeping up the floor for three to four weeks. Petsmart calls this "training," and values it at $5,500. If you quit your pet grooming job in the next two years, you legally owe PetSmart $5,500 to "repay" them for the training:
https://pluralistic.net/2022/08/04/its-a-trap/#a-little-on-the-nose
Workers are also subjected to "unfair and deceptive" bossware: "AI" tools sold to bosses that claim they can sort good workers from bad, but actually serve as random-number generators that penalize workers in arbitrary, life-destroying ways:
https://pluralistic.net/2024/11/26/hawtch-hawtch/#you-treasure-what-you-measure
Some of the most "unfair and deceptive" conduct we endure happens in shadowy corners of industry, where obscure middlemen help consolidated industries raise prices and pick your pocket. All the meat you buy in the grocery store comes from a cartel of processing and packing companies that all subscribe to the same "price consulting" services that tells them how to coordinate across-the-board price rises (tell me again how greedflation isn't a thing?):
https://pluralistic.net/2023/10/04/dont-let-your-meat-loaf/#meaty-beaty-big-and-bouncy
It's not just food, it's all of Maslow's Hierarchy of Needs. Take shelter: the highly consolidated landlord industry uses apps like Realpage to coordinate rental price hikes, turning the housing crisis into a housing emergency:
https://pluralistic.net/2024/07/24/gouging-the-all-seeing-eye/#i-spy
And of course, health is the most "unfair and deceptive" industry of all. Useless middlemen like "Pharmacy Benefit Managers" ("a spreadsheet with political power" -Matt Stoller) coordinate massive price-hikes in the drugs you need to stay alive, which is why Americans pay substantially more for medicine than anyone else in the world, even as the US government spends more than any other to fund pharma research, using public money:
https://pluralistic.net/2024/09/23/shield-of-boringness/#some-men-rob-you-with-a-fountain-pen
It's not just drugs: every piece of equipment – think hospital beds and nuclear medicine machines – as well as all the consumables – from bandages to saline – at your local hospital runs through a cartel of "Group Purchasing Organizations" that do for hospital equipment what PBMs do for medicine:
https://pluralistic.net/2021/09/27/lethal-dysfunction/#luxury-bones
For the past four years, we've lived in an America where a substantial portion of the administrative state went to war every day to stamp out unfair and deceptive practices. It's still happening: yesterday, the CFPB (which Musk has vowed to shut down) proposed a new rule that would ban the entire data brokerage industry, who nonconsensually harvest information about every American, and package it up into categories like "teenagers from red states seeking abortions" and "military service personnel with gambling habits" and "seniors with dementia" and sell this to marketers, stalkers, foreign governments and anyone else with a credit-card:
https://www.consumerfinance.gov/about-us/newsroom/cfpb-proposes-rule-to-stop-data-brokers-from-selling-sensitive-personal-data-to-scammers-stalkers-and-spies/
And on the same day, the FTC banned the location brokers who spy on your every movement and sell your past and present location, again, to marketers, stalkers, foreign governments and anyone with a credit card:
https://www.404media.co/ftc-bans-location-data-company-that-powers-the-surveillance-ecosystem/
These are tantalizing previews of a better life for every American, one in which the rule is, "play fair." That's not the world that Trump and his allies want to build. Their motto isn't "cheaters never prosper" – it's "caveat emptor," let the buyer beware.
Remember the 2016 debate where Clinton accused Trump of cheating on his taxes and he admitted to it, saying "That makes me smart?" Trumpism is the movement of "that makes me smart" life, where if you get scammed, that's your own damned fault. Sorry, loser, you lost.
Nowhere do you see this more than in cryptocurrencyland, so it's not a coincidence that tens – perhaps hundreds – of millions in dark crypto money was flushed into the election, first to overpower Democratic primaries and kick out Dem legislators who'd used their power to fight the "unfair and deceptive" crowd:
https://www.politico.com/newsletters/california-playbook-pm/2024/02/13/crypto-comes-for-katie-porter-00141261
And then to fight Dems across the board (even the Dems whose primary victories were funded by dark crypto money) and elect the GOP as the party of "caveat emptor"/"that makes me smart":
https://www.coindesk.com/news-analysis/2024/12/02/crypto-cash-fueled-53-members-of-the-next-u-s-congress
Crypto epitomizes the caveat emptor economy. By design, fraudulent crypto transactions can't be reversed. If you get suckered, that's canonically a you problem. And boy oh boy, do crypto users get suckered (including and especially those who buy Trump's shitcoins):
https://www.web3isgoinggreat.com/
And for crypto users who get ripped off because they've parked their "money" in an online wallet, there's no sympathy, just "not your keys, not your coins":
https://www.ledger.com/academy/not-your-keys-not-your-coins-why-it-matters
A cornerstone of the "unfair and deceptive" world is that only suckers – that is, outsiders, marks and little people – have to endure consequences when they get rooked. When insiders get ripped off, all principle is jettisoned. So it's not surprising that when crypto insiders got taken for millions the first time they created a DAO, they tore up all the rules of the crypto world and gave themselves the mulligan that none of the rest of us are entitled to in cryptoland:
https://blog.ethereum.org/2016/07/20/hard-fork-completed
Where you find crypto, you find Elon Musk, the guy who epitomizes caveat emptor thinking. This is a guy who has lied to drivers to get them to buy Teslas by promising "full self driving in one year," every year, since 2015:
https://www.consumerreports.org/cars/autonomous-driving/timeline-of-tesla-self-driving-aspirations-a9686689375/
Musk told investors that he had a "prototype" autonomous robot that could replace their workers, then demoed a guy in a robot suit, pretending to be a robot:
https://gizmodo.com/elon-musk-unveils-his-funniest-vaporware-yet-1847523016
Then Musk did it again, two years later, demoing a remote-control robot while lying and claiming that it was autonomous:
https://techcrunch.com/2024/10/14/tesla-optimus-bots-were-controlled-by-humans-during-the-we-robot-event
This is entirely typical of the AI sector, in which "AIs" are revealed, over and over, to be low-waged workers pretending to be robots, so much so that Indian tech industry insiders joke that "AI" stands for "Absent Indians":
https://pluralistic.net/2024/01/29/pay-no-attention/#to-the-little-man-behind-the-curtain
Musk's view is that he's not a liar, merely a teller of premature truths. Autonomous cars and robots are just around the corner (just like the chatbots that can do your job, and not merely convince your boss to fire you while failing to do your job). He's not tricking you, he's just faking it until he makes it. It's not a scam, it's inspirational. Of course, if he's wrong and you are scammed, well, that's a you problem. Caveat emptor. That makes him smart.
Musk does this all the time. Take the Twitter blue tick, originally conceived of as a way to keep Twitter users from being scammed ("unfair and deceptive") by con artists pretending to be famous people. Musk's inaugural act at Twitter was to take away blue ticks from verified users and sell them to anyone who'd pay $8/month. Almost no one coughed up for this – the main exception being scammers, who used their purchased, unverified blue ticks to steal from Twitter users ("that makes me smart").
As Twitter hemorrhaged advertising revenue and Musk became increasingly desperate to materialize an army of $8/month paid subscribers, he pulled another scam: he nonconsensually applied blue ticks to prominent accounts, in a bid to trick normies into thinking that widely read people valued blue ticks so much they were paying for them out of their own pockets:
https://www.bbc.com/news/technology-65365366
If you were tricked into buying a blue tick on this pretense, well, caveat emptor. Besides, it's not a lie, it's a premature truth. Someday all those widely read users with nonconsensual blue ticks will surely value them so highly that they do start to pay for them. And if they don't? Well, Musk got your $8: "that makes me smart."
Scammers will always tell you that they're not lying to you, merely telling premature truths. Sam Bankman-Fried's defenders will tell you that he didn't actually steal all those billions. He gambled them on a bet that (sorta-kinda) paid off. Eventually, he was able to make all his victims (sorta-kinda) whole, so it's not even a theft:
https://www.cnn.com/2024/05/08/business/ftx-bankruptcy-plan-repay-creditors/index.html
Likewise, Tether is a "stablecoin" that was unable to pass an audit for many years as it issued unbacked, unregulated securities while lying and saying that for every dollar they minted, they had a dollar in reserves. Tether now (maybe) has reserves to equal its outstanding coins, so obviously all those years where they made false claims, they weren't lying, merely telling a premature truth:
https://creators.spotify.com/pod/show/cryptocriticscorner/episodes/Tether-wins–Skeptics-lose-the-end-of-an-era-e2rhf5e
If Tether had failed a margin call during those years and you'd lost everything, well, caveat emptor. The Tether insiders were always insulated from that risk, and that's all that matters: "that makes me smart."
When I think about the next four years, this is how I frame it: the victory of "that makes me smart" over "fairness and truth."
For years, progressives have pointed out the right's hypocrisy, despite that fact that Americans have been conditioned to be so cynical that even the rankest hypocrisy doesn't register. But "caveat emptor?" That isn't just someone else's bad belief or low ethics: it's the way that your life is materially, significantly worsened. The Biden administration – divided between corporate Dems and the Warren/Sanders wing that went to war on "unfair and deceptive" – was ashamed and nearly silent on its groundbreaking work fighting for fairness and honesty. That was a titanic mistake.
Americans may not care about hypocrisy, but they really care about being stolen from. No one wants to be a sucker.
374 notes · View notes
gingersnaptaff · 3 months ago
Text
Welsh Law, Women, and The Mab - Mab March Madness 3
Sorry I have been gone for like what a week? Two? Idk. Anyways, I'm super sorry but I bring a TASTY TREAT TO APOLOGISE.
I'm gonna talk about The Mabinogion, Welsh law, and women's rights because boi howdy, is it a tasty af text. Quick note: I'm gonna be focusing like on every lady BUT I want you to know that they're all great. Another quick note: I'm not an academic but I am SICK AS FUCK OF SEEING PEOPLE USE ONE FUCKIN SOURCE FOR WELSH DIVORCE AND THINKING THAT EQUATES TO THE WHOLE FUCKIN THING. BAM. DONE. FUCK OFF. READ A GODDAMN BOOK. It's so much more complex and, by God, I'm going to tell u about women's rights, OKAY?!
‘Welsh myth,’ writes Peter Berresford Ellis, ‘is not short on determined women.’ Seriously, the Four Branches give us Arawn's wife (who in her only conversation with her husband gets the upper hand, TWICE!), Rhiannon (resourceful af, a fuckin QUEEN), Branwen (a dignified figure, SENDS A MESSAGE TO HER BROTHER WITH A STARLING, brokenhearted for the destruction done in her name), Cigfa (owner of the only brain cell within the third branch), Aranrhod (actually needs to kill her brother and I support her), Goewin (what she goes through is horrific and she needs a SWORD), and Blodeuwedd (her whole vibe is IMMACULATE). The Three Romances give us Luned (Best girl, not afraid to give Owain a piece of her mind), Angharad (who could be seen as a thingy for colonialism but also generous if her ‘golden-handed’ epithet is anything to go by), and Enid (one of the Three Splendid Maidens of Arthur's court in the Triads! Eat shit, Geraint!)
Furthermore, you have Gwenhwyfar, who would later ‘get the short end of the stick’ within the Anglo-Norman Christian retelling of Arthuriana. Both Arthur - who had three mistresses in Welsh myth - and Gwen herself were having ‘adulterous intrigues’ in Welsh myth. She, particularly in Geraint ac Enid, is a fascinating look at a queen’s role within the Welsh court.
But lemme focus on the Four Branches real quick! They are, I'd argue, an enmeshment of Welsh Law and Welsh myth, in regards to women. Andrew Breeze says the Mab, ‘reads convincingly’ as being written by a woman. Its main thrust is to do with women and how they're treated by the men and the small but significant ways they break out of their patriarchal cycles.
Now, not every branch has laws in it but what they do have is fascinating. This can be most clearly seen in Branch 1 and Rhiannon's whole affair. It is she who holds the command within the first half of the text after she makes herself known to Pwyll. She is the one who makes the first move, as Breeze writes: ‘the shots are called by the woman not the man.’ It is she who rides past him in her ‘shining, golden garment,’ ‘sitting astride a pale-white horse,’ and initiates the chase that ultimately results in Pwyll chasing after her (and exhausting his horse). Furthermore, she is presented as being the main instigator of the whole affair for she did not wish to be given to Gwawl ap Clud in marriage. This is true to Welsh texts for, as the Venedotian (North Walian) code states, ‘every woman is to go the way she willeth, freely.’ Try as her father might - and he doesn't thankfully, good ol’ Hyfaidd - he cannot force Rhiannon to marry Gwawl, even if he might try. But all this results in him being whacked in a bag and smacked about. ‘And that was the first time that Badger in the Bag was played,’ so the text proclaims.
Now, this personal bestowal or ‘lladrut’ (stolen, secretive, furtive) wasn't looked down upon as you might think. If it was then why did a literal fuckin princess do it in the 1100s? (*Blows kiss to the sky* for as Geraint H. Jenkins writes, ‘a beautiful princess so terrifyingly androgynous that she was likened by Gerald of Wales to the Queen of the Amazons:’ Gwenllian ferch Gruffudd ap Cynan!) This was just as legally binding as a ‘rod o cenedl’ (gift of kindred) marriage, and all children were accepted.
After that though, it is Rhiannon who is on the back foot, regarded suspiciously by Pwyll's court. Her aforementioned white and gold colours would've let the reader/listener of these tales know that she was Otherworldly, something to be feared as much as admired, and so she is by both the men who counsel her husband, and the women her son’s care is entrusted to.
The primary suspicion is cast upon her after she and Pwyll have been married for three years. ‘The noblemen of the land began to worry at seeing a man whom they loved as much as their lord and foster-brother without an heir, and they summoned him to them.
“Lord,” they said, “we know that you are not as old as some of the men of this land, but we are afraid that you will not get an heir from the wife you have. And because of that, take another wife from whom you may have an heir.”’
Now, The Mab brings up an excellent point that the Laws themselves remain silent on - a woman could be divorced if she did not give her husband an heir. Other reasons for a husband to divorce his wife were ‘dependent on her unchastity either before or after a marriage,’ loose conduct in her marriage - so, like, if she had an affair or smth - or ‘failure to observe the terms of the marriage contract.’ Women too could divorce their husbands - which is great, sure! - except that they could only do so ‘on the grounds of impotency, leprosy, or fetid breath,’ as well as if she found him committing adultery, but only after the third time. There is an inherent imbalance there as well when you take into account that men could have - and raise! - their bastards without scorn. Notable fucker (as in the sexual sense) Owain Gwynedd is perhaps the shining example of this. Man had many kids! His second wife, Cristina, had to give up the legitimate child she'd had with her first husband before she married Owain, and it doesn't seem like she made efforts to contact him after that.
Rhiannon, too, is then further unjustly punished for the loss of her child. ‘Pwyll punishes her,’ writes Berresford Ellis, ‘by ostracising her’ and as The Mab states: ‘there was a mounting block outside the gate,’ and she was, ‘to sit by that every day and tell the whole story to anyone whom she thought might not know it, and offer to carry guests and strangers on her back to the court if they permitted it.’ Luckily nobody does, but it alludes to the ostracisation women had to deal with if they could not give their husband a child, as well as, perhaps, the punishment applied to a woman if it was discovered after she'd married that she was unchaste before said marriage: ‘The woman's clothes were cut to the level of her hips, she was made to hold the tail, well greased, of a year-old steer, which was thrust through a hole in the house door. Two men prodded the steer, and, if the woman could hold the animal, she could keep it as her agweddi [her dowry, payable by a husband once a marriage was consummated] and, if she could not, she had to be content with the grease that clung to her fingers.’
Furthermore, as can be seen in ‘Culhwch ac Olwen,’ if a woman was given in marriage - so if she did not elope herself - then only her father and brothers could give her, with the proviso that it was done in concert with the other generations of the family. Ysbaddaden Pencawr (big giant lad, Olwen's father, winner of the longest beard award for the nth year running) states: ‘“Her four great-grandmothers and four great-grandsires are alive; it is needful that I should take counsel with them.”’ This lines up with Welsh law: if a Welsh woman was given in marriage to an alltud (foreigner), her son claimed ‘mamwys’ from those who were related to him in four degrees.
You can also see this within the Second Branch. Branwen, ‘a sensitive and intelligent young woman,’ the sister of Bendigeidfran and Manawydan and the half-sister of Nisien and Efnisien, is bestowed in marriage to Matholwch, King of Ireland. Efnisien's whole dealio is rage. He's literally named HOSTILE. When he is not consulted on the matter of his sister’s marriage - ‘“Is that what they have done with such a fine maiden,’ he says in The Mab, ‘and my sister at that, given her away without my permission? They could not have insulted me more.”’ - he flies into a rage and ‘went for [Matholwch's] horses, and cut their lips to the teeth and their ears down to their heads, and their tails to their backs, and where he could get a grip on the eyelids he cut them to the bone.’ This act of violence causes Matholwch to abuse Branwen once the couple returns to Ireland, even though an attempted redress for the insult has been made through Bendigeidfran bestowing the Cauldron of Rebirth (or Pair Dadeni) on his brother-in-law, and results in Branwen rearing a starling to send a message to her brother TO GET HER OUTTA THERE. The ‘sorrows’ that Branwen subsequently endured are traceable to the unjustified revenge of the ‘quarrelsome’ Efnisien for he, being only her half-brother, was not entitled to consultation in the matter of his half-sister's marriage. As Andrew Breeze writes in his book ‘The Origins of the Four Branches of the Mabinogi,’ the starling escapade ‘shows the narrator’s awareness of how a woman might use literacy to escape from bondage and male violence.’ It is this letter that brings about her liberation, as well as the fleet from Britain. This stratagem also achieves the redress for Branwen that Bendigeidfran’s attempt could not. Personally, for me, this shows why divorce would be fuckin pointless for her. She is hidden. She is being abused. She gets given ‘a box on the ear’ every day. Do you think she can escape from that?
Likewise with Enid, ‘a patient’ young woman: she could NOT divorce Geraint. She is the daughter of Ynwyl, a ‘chieftain who has fallen on evil days,’ and is fuckin dirt poor. She could keep her gowyn, cowyll, and argyfreu - payments made payable to the woman by the man after they were married - ‘although the Venedotian code deprived her of the latter if the separation were caused by the woman’s own immorality,’ but, unless you were a King's daughter, it wouldn't amount to much. Enid’s predicament within the text is made that much more brutal when you realise it's Gwenhwyfar who has given her and Geraint leave to marry. It is she who is ‘entrusted with the Maiden [Enid]’ once she arrives at court along with Geraint. Arthur is the one to give Enid to Geraint. If she fled then she would be insulting both the King and the Queen.
Furthermore, it is not a divorce in the modern-day sense. The Laws speak of ‘ysgar,’ or separation. A distinction was drawn between separation before or after seven years - for, unlike with, say, the Norman church, marriage wasn't seen as being for life as such, but was merely a contract that could be broken, though only by mutual consent. This distinction only affected the rights the woman had in property. Enid, Branwen, and also Blodeuwedd could only divorce if their husbands agreed to it. Neither Geraint, Matholwch, nor Lleu Llaw Gyffes would want to divorce their wives, for 1) he's a dick and would rather she ‘constantly prove her love and loyalty to him,’ 2) she’s Queen of Ireland - although she herself wryly says, “though I am no ‘lady'” when she is questioned about the mysterious 'forest on the sea’ - and, chiefly, has given him a son, and 3) Blodeuwedd, 'the most beautiful maiden that anyone had ever seen,’ was ‘conjured’ for Lleu. He owns her entirely. For as Saunders Lewis has her say in his play ‘The Woman Made of Flowers: Blodeuwedd,’ ‘I bear Llew's collar.’
Plus, if a woman didn't possess land by herself - which I think neither lady I've mentioned does, really - then she could not enter into ‘any bargains or surety’ in regards to Sarhâd - blood-price - and so her husband must do it for her. You could say, if you wanted to get really out there, that the war between Wales and Ireland is Branwen's Sarhâd, although that's speculative and I'm hesitant to give it a complete YES.
Now, to THE QUEEN. The laws give a look at what exactly an insult towards the queen would entail - as does the Mab - when Gwenhwyfar is assaulted by a knight in Peredur: ‘And the knight grabbed the goblet from Gwenhwyfar's hand and poured the drink that was in it over her face and breast, and gave Gwenhwyfar a great clout on the ear.’ This blow echoes Branwen's, as well as the one she receives from Mordred in The Triads where he ‘dragged [her] from her royal chair and struck a blow upon her.’ No surprise, this would be seen as Bad Fuckin News.
The Laws are clear on this: striking the Queen was seen as an insult. In Arthuriana, Mordred's - or, in some cases, Gwenhwyfach's - striking of Gwenhwyfar leads to the Battle of Camlann. In the Mab, the clout on Branwen’s ear also leads to war. I’d also say you could take this further and suggest that Rhiannon’s treatment - being made to act as a horse - is an insult, as - I’m assuming - you're not gonna be nice to the woman you're using as a LITERAL STEED. Anyways, like I've mentioned beforehand, it was the Queen’s job to take care of the ladies of her court, and, also, fun fact, the amobyr (a fee payable for the maidenhead of the woman) was payable to the queen instead of the King after their daughter married. Not much is said about the queen in regard to her position within the laws, but we have to be grateful for what we do have.
The queen had no political power - except maybe through her personal influence over the King, like, say, Joan, Princess of Wales. This ‘soft power,’ as it were, could be used when you became Queen Dowager, as evidenced by the way Queen Angharad, the wife of Gruffudd ap Cynan, used the lands she'd been granted on her becoming queen to aid her wayward third-born son, Cadwaladr - although she had a ‘wide power of protection, a considerable special entourage of servants,’ and possessed certain privileges like ‘the right to circuit the land.’ Furthermore, there was never a ruling queen throughout Welsh history when the Laws of Hywel Dda were in operation, and certainly no Queen Regnant. (Strange considering the laws were drawing on Celtic sources where there were defo women leaders like Boudicca (Buddug) and Cartimandua. ‘This ambivalence of gender,’ writes Alice Roberts in The Celts, ‘[provided] women the possibility to achieve the highest status in society,’ so it is curious as to why the later Welsh dropped this. Surely, on account of that, they would not be opposed to it? However, Barry Cunliffe writes ‘it must be readily admitted that any consideration of Celtic social systems is likely to be biased, not only by the prejudices and preconceptions of the Graeco-Roman sources but by the narrow time span and geographical area over which they range.’ As well as this, ‘women clearly occupied a more significant position in Celtic society than they did in the Graeco-Roman world’ and this can be seen within Welsh law; I'd just caution anybody who thinks it was a noted feminist utopia.)
Yet the queen's high status can be evidenced in there being both the ‘transmission of royal dignity through the female’ as well as ‘devolution of land through females,’ thus allowing matrilineal descent to hold the same reverence as the male line - which was v handy for Owain Glyndŵr cuz his mam was descended from the house of Gwynedd. As well as that, the Queen had her own privy purse and ‘it was a universal rule - so in ALL codes - that one-third of the income derived from the king went to the queen for her personal use.’ Plus, all officers of the household were ‘under her socially’ and received their linens from her, while the Judge of the Court received his insignia of office, that being a gold ring, from her too once he was invested. Furthermore, she was second only to the king in status - ahead even of the Etifedd or Edling, that being the king's first bastard or legitimate son!
(Look, all my essay stuff is interconnected. It's the Marvel Universe of Wales. The Cymru-verse. 🤷)
The ‘dominant role of women within the Mabinogion’ does reflect in some ways the power women had within Welsh society. It is, perhaps, our finest link to showing what rights women had within the time period. Certainly, it's a valuable text in both a feminist sense AND a mythological one. Indeed, as Miranda Aldhouse-Green writes in ‘Enchanted Wales,’ ‘it is my belief that … some medieval mythic narratives may have drawn inspiration … from Iron Age and Roman Welsh culture.’ This bridging is evident within both the mythology and the Laws of Hywel Dda, or Cyfraith Hywel. Whether it be in Pwyll Pen Annwfn, or Peredur, Owain, or The Dream of Macsen Wledig, these tales serve as a bridge to both the medieval and the ancient, and, with them, so too do we get a view on Medieval Wales’ attitudes to women.
Women are front and centre across pretty much all eleven - twelve if you count The Tale of Taliesin - tales. As Bendigeidfran says in The Mab, ‘I will be a bridge,’ so too are these vitally important texts. Both they and the laws are heavily Christianised, yes, but their outer trappings of Celticism remain.
You gotta remember these laws were codified by Hywel Dda, but they're drawing on earlier Celtic laws. Hywel Dda was Christian (he wrote the laws in about the mid-tenth century, although the earliest manuscripts we have are later, from the 13th, a bit like the Mab!) but - much like whoever the writer of the Mab was, be they an anonymous monk or, as Andrew Breeze postulates, Gwenllian ferch Gruffudd ap Cynan - drew on earlier Celtic sources. Furthermore, Cyfraith Hywel is a bloody wonderful text! Do you know that it has a law relating to intersex people?! No? Well here we are: ‘If a person be born with the members of a man and those of a woman, and it be doubtful of which it may make use; some say, that according to such as it principally may use, its privilege is to rank; but, if it make use of each, the law says, that it is to rank with the highest privilege, and that is the privilege of a man: and, if it should become pregnant, the offspring is to have the patrimony of the man who caused the pregnancy; but, if it should make a woman pregnant, the son is then to obtain its patrimony.’
*Blows a kiss to the sky* For Cyfraith Hywel. There's a reason he's known as ‘The Good.’ There's a reason why the Senedd (Welsh Parliament) building that houses the Members of the Senedd and their staff is called Tŷ Hywel, or Hywel's House. He's a big dealio.
Anyways, Welsh law is great. Read a fuckin book. If anybody makes a half-baked assumption about Welsh law again, I'm killing you and taking all your teeth.
Sources
Peter Berresford Ellis - Celtic Women
Sioned Davies - The Mabinogion
Miranda Aldhouse-Green - Enchanted Wales
Barry Cunliffe - The Celtic World & Ancient Celts (Second Edition)
Andrew Breeze - The Origins of The Four Branches of The Mabinogi
Alice Roberts - The Celts
Thomas Peter Ellis - Welsh Tribal Law (DM for a link if you want it!)
47 notes · View notes
warlocklawyer666 · 4 months ago
Text
Costume Design in Wicked
So, I just watched Wicked for the third time since it’s available on Amazon now and goodness, do I love this movie. There are so many small details that you only really start appreciating on the second or third watch, so I wanna talk about one of the things that caught my eye, already on the second watch, but even more so on this one: the costume design.
The costume design of this movie is, at least in my opinion, glorious. And by that I mean how certain characters stand out from the crowd through their clothes. If you were to look at a mass of people from Shiz, you’d know right away who the main cast is and who the supporting characters are.
The whole school follows a specific dress code: grey trousers (sometimes with a skirt on the side), light blue shirt and a matte, dark cyan-blue jacket on top. And while these parts get styled differently, trousers exchanged for skirts and similar changes being made, we, the audience, can easily tell by this who background characters are.
If we now take a look at side characters, it is clear that, while their clothes are in fact similar to the background characters’, there are still distinct differences. G(a)linda’s friend Pfannee wears, instead of the usual matte jacket, a (presumably) velvet one, giving him a shinier look without pulling too much of the audience’s attention to it. Plus he wears extremely fancy glasses with a very intricate design; much more notable here is their shape tho, which is rectangular, something that I didn’t spot on any other character at Shiz, even tho there are at least three background characters who also wear glasses, all of which are round.
Glinda’s other friend, Shenshen, has a uniform that is exclusively grey, except for a few pink stripes.
Glinda is often around those two, and Shenshen, lacking a lot of colour in her uniform, lets Glinda pop, while at the same time showing how she belongs to her squad through the pink stripes - something that Pfannee does too, in my opinion, even if he does it in a slightly different way, by being a bit more flashy, just like Glinda.
Nessa, Elphaba’s sister, also has a few differences in her daily attire in comparison to that of her classmates, the most prominent being her wearing a dress during the ‘Dancing through Life’ scene at Shiz, as well as the clothes she arrives in. What is interesting here is how her jacket is the only one that is entirely closed; this could either be because her overprotective father didn’t want her to catch a cold, or it could symbolise how she refuses any help, closing herself off instead of being open and comfortable - a change which we can clearly see later in the movie, during the scenes where she wears a dress. There she seems much more comfortable and open, with her clothes being in turn more open, while no one tries to constantly help her or looks excessively after her wellbeing.
Let’s move on to Prince Fiyero. Most of the time he wears what I think is a dark royal blue, which would be a nod to his heritage; on top of that he is also, as far as I could tell, the only student who has golden ornaments as part of his general attire. Even in his actual school outfit, which he wears during the lion cub scene, we can see a clear distinction from other students through his light beige trousers and shirt, which has a lighter blue colour than those of the other students and matches the shirt colour Elphaba wears in the same scene, showing their connection and the bonding that happens there. On another note, his usual dark blue clothing matches neither Elphaba nor Glinda specifically; however, the blue is in fact roughly in a triadic colour scheme with Elphaba’s skin colour and, if lightened, also with Glinda’s overall pink wardrobe.
Finally, onto Elphaba and Glinda. The colours of their clothes are black and pink and, except for a few accents in other designs, are specific to them. When looking at a crowd from Shiz, those are the two that you’d notice first. They never wear the actual school uniform and are as distinct from the rest of the school as light and dark.
That is all I can think of so far, but if I missed something or got something wrong, please tell me
Anyway, thanks for listening to my ted talk ^^
29 notes · View notes
jcmarchi · 1 month ago
Text
Why Language Models Get ‘Lost’ in Conversation
New Post has been published on https://thedigitalinsider.com/why-language-models-get-lost-in-conversation/
Why Language Models Get ‘Lost’ in Conversation
A new paper from Microsoft Research and Salesforce finds that even the most capable Large Language Models (LLMs) fall apart when instructions are given in stages rather than all at once. The authors found that performance drops by an average of 39 percent across six tasks when a prompt is split over multiple turns:
A single turn conversation (left) obtains the best results, but is unnatural for the end-user. A multi-turn conversation (right) finds even the highest-ranked and most performant LLMs losing the effective impetus in a conversation. Source: https://arxiv.org/pdf/2505.06120
More strikingly, the reliability of responses takes a nosedive, with prestigious models such as GPT-4.1 and Gemini 2.5 Pro swinging between near-perfect answers and manifest failures, depending on how the same task is phrased; further, output consistency can drop by more than half in the process.
To explore this behavior, the paper introduces a method called sharding*, which splits fully-specified prompts into smaller fragments and releases them one at a time into a conversation.
In the most basic terms, this is equivalent to giving a cohesive and comprehensive single order at a restaurant, leaving the waiter with nothing to do but acknowledge the request; or else deciding to attack the matter collaboratively:
Two extreme versions of a restaurant conversation (not from the new paper, for illustrative purposes only).
For emphasis, the example above perhaps puts the customer in a negative light. But the core idea depicted in the second column is that of a transactional exchange that clarifies a problem-set, prior to addressing the problems – apparently a rational and reasonable way of approaching a task.
This setup is reflected in the new work’s drip-fed, sharded approach to LLM interaction. The authors note that LLMs often generate overly long responses and then continue to rely on their own insights even after those insights have been shown to be incorrect or irrelevant. This tendency, combined with other factors, can cause the system to lose track of the exchange entirely.
In fact, the researchers note what many of us have found anecdotally – that the best way to get the conversation back on track is to start a new conversation with the LLM.
‘If a conversation with an LLM did not lead to expected outcomes, starting a new conversation that repeats the same information might yield significantly better outcomes than continuing an ongoing conversation.
‘This is because current LLMs can get lost in the conversation, and our experiments show that persisting in a conversation with the model is ineffective. In addition, since LLMs generate text with randomness, a new conversation may lead to improved outcomes.’
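As a practical aside, that restart heuristic is easy to automate. The sketch below is my own illustration, not something from the paper: it collects every user turn from a stalled conversation and replays them as one consolidated prompt at the start of a fresh context (shown with the OpenAI Python SDK; any chat client would work the same way):

```python
# Minimal sketch of the "start a new conversation" fix, assuming the
# OpenAI Python SDK; the consolidation strategy is my own illustration.
from openai import OpenAI

client = OpenAI()

def restart_conversation(history: list[dict], model: str = "gpt-4o-mini") -> str:
    """Collapse all user turns from a multi-turn history into a single
    fully-specified prompt and send it as the opening message of a
    brand-new conversation, discarding the model's earlier replies."""
    user_turns = [m["content"] for m in history if m["role"] == "user"]
    consolidated = "\n".join(user_turns)  # same information, one turn
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": consolidated}],
    )
    return response.choices[0].message.content
```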
The authors acknowledge that agentic systems such as Autogen or LangChain can potentially improve the outcomes by acting as interpretative layers between the end-user and the LLM, only communicating with the LLM when they have gathered enough ‘sharded’ responses to coagulate into a single cohesive query (which the end-user will not be exposed to).
However, the authors contend that a separate abstraction layer should not be necessary, or else be built directly into the source LLM:
‘An argument could be made that multi-turn capabilities are not a necessary feature of LLMs, as it can be offloaded to the agent framework. In other words, do we need native multi-turn support in LLMs when an agent framework can orchestrate interactions with users and leverage LLMs only as single-turn operators?…’
But having tested the proposition across their array of examples, they conclude:
‘[Relying] on an agent-like framework to process information might be limiting, and we argue LLMs should natively support multi-turn interaction’
This interesting new paper is titled LLMs Get Lost In Multi-Turn Conversation, and comes from four researchers across MS Research and Salesforce.
Fragmented Conversations
The new method first breaks down conventional single-turn instructions into smaller shards, designed to be introduced at key moments during an LLM interaction, a structure that reflects the exploratory, back-and-forth style of engagement seen in systems such as ChatGPT or Google Gemini.
Each original instruction is a single, self-contained prompt that delivers the entire task in one go, combining a high-level question, supporting context, and any relevant conditions. The sharded version breaks this into multiple smaller parts, with each shard adding just one piece of information:
Paired instructions showing (a) a complete prompt delivered in a single turn and (b) its sharded version used to simulate an underspecified, multi-turn interaction. Semantically, each version delivers the same informational payload.
The first shard always introduces the main goal of the task, while the rest provide clarifying details. Together, they deliver the same content as the original prompt, but spread out naturally over several turns in the conversation.
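To make that structure concrete, here is one way a sharded instruction might be represented. This is my own sketch, not the paper's data format, and the example task is invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class ShardedInstruction:
    """A fully-specified prompt split into shards: shards[0] states the
    high-level goal, and each later shard adds one clarifying detail."""
    task_id: str
    shards: list[str]

    def full_prompt(self) -> str:
        # Rejoining the shards recovers the original single-turn instruction.
        return " ".join(self.shards)

# Hypothetical example in the spirit of the paper's math tasks:
example = ShardedInstruction(
    task_id="math-001",
    shards=[
        "Help me work out the bakery's daily profit.",
        "It sells 120 loaves a day at $4 each.",
        "Ingredients cost $1.50 per loaf.",
        "Fixed daily costs are $180.",
    ],
)
```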
Each simulated conversation unfolds between three components: the assistant, the model under evaluation; the user, a simulated agent with access to the full instruction in sharded form; and the system, which invigilates and scores the exchange.
The conversation begins with the user revealing the first shard and the assistant replying freely. The system then classifies that response into one of several categories, such as a clarification request or a full answer attempt.
If the model does attempt an answer, a separate component extracts just the relevant span for evaluation, ignoring any surrounding text. On each new turn, the user reveals one additional shard, prompting another response. The exchange continues until either the model gets the answer right or there are no shards left to reveal:
Diagram of a sharded conversation simulation, with the evaluated model highlighted in red.
Early tests showed that models often asked about information that hadn’t been shared yet, so the authors dropped the idea of revealing shards in a fixed order. Instead, a simulator was used to decide which shard to reveal next, based on how the conversation was going.
The user simulator, implemented using GPT-4o-mini, was therefore given full access to both the entire instruction and the conversation history, tasked with deciding, at each turn, which shard to reveal next, based on how the exchange was unfolding.
The user simulator also rephrased each shard to maintain conversational flow, without altering the meaning. This allowed the simulation to reflect the ‘give-and-take’ of real dialogue, while preserving control over the task structure.
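Pulling those pieces together, the simulation loop might look something like the following. This is a reconstruction from the description above, not the authors' code; the `assistant`, `user_sim`, and `grader` callables are stand-ins for the model under test and the GPT-4o-mini-backed components:

```python
def run_sharded_simulation(instruction, assistant, user_sim, grader):
    """One simulated conversation: shards are revealed one turn at a time
    until the assistant answers correctly or no shards remain. Sketch only;
    the three injected components stand in for the paper's actual setup."""
    history = []
    remaining = list(instruction.shards[1:])
    next_message = instruction.shards[0]  # the first shard states the goal
    while True:
        history.append({"role": "user", "content": next_message})
        reply = assistant(history)  # the model under evaluation
        history.append({"role": "assistant", "content": reply})

        # The system classifies the reply (clarification request, full
        # answer attempt, etc.) and extracts just the answer span.
        if grader.classify(reply) == "answer_attempt":
            answer = grader.extract_answer(reply)
            if grader.is_correct(answer, instruction):
                return {"success": True, "turns": len(history) // 2}
        if not remaining:
            return {"success": False, "turns": len(history) // 2}

        # The user simulator picks the next shard based on how the
        # conversation is going, and rephrases it to keep the flow natural.
        shard, rephrased = user_sim.pick_and_rephrase(remaining, history)
        remaining.remove(shard)
        next_message = rephrased
```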
Before the conversation begins, the assistant is given only the basic information needed to complete the task, such as a database schema or an API reference. It is not told that the instructions will be broken up, and it is not guided toward any specific way of handling the conversation. This is done on purpose: in real-world use, models are almost never told that a prompt will be incomplete or updated over time, and leaving out this context helps the simulation reflect how the model behaves in a more realistic context.
GPT-4o-mini was also used to decide how the model’s replies should be classified, and to pull out any final answers from those replies. This helped the simulation stay flexible, but did introduce occasional mistakes. However, after checking several hundred conversations by hand, the authors found that fewer than five percent had any problems, and fewer than two percent showed a change in outcome because of them - an error rate they considered low enough within the parameters of the project.
Simulation Scenarios
The authors used five types of simulation to test model behavior under different conditions, each a variation on how and when parts of the instruction are revealed.
In the Full setting, the model receives the entire instruction in a single turn. This represents the standard benchmark format and serves as the performance baseline.
The Sharded setting breaks the instruction into multiple pieces and delivers them one at a time, simulating a more realistic, underspecified conversation. This is the main setting used to test how well models handle multi-turn input.
In the Concat setting, the shards are stitched back together as a single list, preserving their wording but removing the turn-by-turn structure. This helps isolate the effects of conversational fragmentation from rephrasing or content loss.
The Recap setting runs like Sharded, but adds a final turn where all previous shards are restated before the model gives a final answer. This tests whether a summary prompt can help recover lost context.
Finally, Snowball goes further, by repeating all prior shards on every turn, keeping the full instruction visible as the conversation unfolds – and offering a more forgiving test of multi-turn ability.
Simulation types based on sharded instructions. A fully-specified prompt is split into smaller parts, which can then be used to simulate either single-turn (Full, Concat) or multi-turn (Sharded, Recap, Snowball) conversations, depending on how quickly the information is revealed.
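In code, the five settings differ only in how the shard list is turned into user turns. A compact sketch, reconstructed from the descriptions above rather than taken from the authors' implementation:

```python
def build_user_turns(shards: list[str], setting: str) -> list[str]:
    """Return the user message for each turn under a given simulation
    setting. Sketch only, reconstructed from the paper's descriptions."""
    if setting == "full":      # the original single-turn instruction
        return [" ".join(shards)]
    if setting == "concat":    # same content in one turn, as a list
        return ["\n".join(f"- {s}" for s in shards)]
    if setting == "sharded":   # one shard revealed per turn
        return list(shards)
    if setting == "recap":     # sharded, plus a final turn restating all shards
        return list(shards) + ["To recap:\n" + "\n".join(shards)]
    if setting == "snowball":  # every turn repeats all shards revealed so far
        return ["\n".join(shards[: i + 1]) for i in range(len(shards))]
    raise ValueError(f"unknown setting: {setting}")
```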
Tasks and Metrics
Six generation tasks were chosen to cover both programming and natural language domains: code generation prompts were taken from HumanEval and LiveCodeBench; Text-to-SQL queries were sourced from Spider; API calls were constructed using data from the Berkeley Function Calling Leaderboard; elementary math problems were provided by GSM8K; tabular captioning tasks were based on ToTTo; and multi-document summaries were drawn from the Summary of a Haystack dataset.
Model performance was measured using three core metrics: average performance, aptitude, and unreliability.
Average performance captured how well a model did overall across multiple attempts; aptitude reflected the best results a model could reach, based on its top-scoring outputs; and unreliability measured how much those results varied, with larger gaps between best and worst outcomes indicating less stable behavior.
All scores were placed on a 0-100 scale to ensure consistency across tasks, and metrics were computed for each instruction and then averaged to provide an overall picture of model performance.
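Given the scores from repeated runs of one instruction, the three metrics are straightforward to compute. The sketch below assumes percentile-based cutoffs - the article only says that aptitude tracks top-scoring outputs and unreliability the best-worst gap, so the exact 90th/10th percentiles are my assumption:

```python
import statistics

def conversation_metrics(scores: list[float]) -> dict[str, float]:
    """Average performance, aptitude, and unreliability for one instruction
    across repeated simulated runs (scores on a 0-100 scale). Sketch only:
    the 90th/10th percentile cutoffs are an assumption, not necessarily the
    paper's exact definition."""
    deciles = statistics.quantiles(scores, n=10)
    aptitude = deciles[-1]  # ~90th percentile: the best the model reaches
    floor = deciles[0]      # ~10th percentile: the worst it sinks to
    return {
        "average": statistics.mean(scores),
        "aptitude": aptitude,
        "unreliability": aptitude - floor,
    }

# e.g. ten simulated runs of the same sharded instruction:
print(conversation_metrics([95, 40, 88, 35, 90, 42, 85, 38, 92, 45]))
```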
Six sharded tasks used in the experiments, covering both programming and natural language generation. Each task is shown with a fully-specified instruction and its sharded version. Between 90 and 120 instructions were adapted from established benchmarks for each task.
Contenders and Tests
In the initial simulations (with an estimated cost of $5000), 600 instructions spanning six tasks were sharded and used to simulate three conversation types: full, concat, and sharded. For each combination of model, instruction, and simulation type, ten conversations were run, producing over 200,000 simulations in total – a scheme that made it possible to capture both overall performance and deeper measures of aptitude and reliability.
Fifteen models were tested, spanning a wide range of providers and architectures: the OpenAI models GPT-4o (version 2024-11-20), GPT-4o-mini (2024-07-18), GPT-4.1 (2025-04-14), and the thinking model o3 (2025-04-16).
Anthropic models were Claude 3 Haiku (2024-03-07) and Claude 3.7 Sonnet (2025-02-19), accessed via Amazon Bedrock.
Google contributed Gemini 2.5 Flash (preview-04-17) and Gemini 2.5 Pro (preview-03-25). Meta models were Llama 3.1-8B-Instruct and Llama 3.3-70B-Instruct, as well as Llama 4 Scout-17B-16E, via Together AI.
The other entries were OLMo 2 13B, Phi-4, and Command-A, all accessed locally via Ollama or Cohere API; and Deepseek-R1, accessed through Amazon Bedrock.
For the two ‘thinking’ models (o3 and R1), token limits were raised to 10,000 to accommodate longer reasoning chains:
Average performance scores for each model across six tasks: code, database, actions, data-to-text, math, and summary. Results are shown for three simulation types: full, concat, and sharded. Models are ordered by their average full-setting score. Shading reflects the degree of performance drop from the full setting, with the final two columns reporting average declines for concat and sharded relative to full.
Regarding these results, the authors state†:
‘At a high level, every model sees its performance degrade on every task when comparing FULL and SHARDED performance, with an average degradation of -39%. We name this phenomenon Lost in Conversation: models that achieve stellar (90%+) performance in the lab-like setting of fully-specified, single-turn conversation struggle on the exact same tasks in a more realistic setting when the conversation is underspecified and multi-turn.’
Concat scores averaged 95 percent of full, indicating that the performance drop in the sharded setting cannot be explained by information loss. Smaller models such as Llama3.1-8B-Instruct, OLMo-2-13B, and Claude 3 Haiku showed more pronounced degradation under concat, suggesting that smaller models are generally less robust to rephrasing than larger ones.
The authors observe†:
‘Surprisingly, more performant models (Claude 3.7 Sonnet, Gemini 2.5, GPT-4.1) get equally lost in conversation compared to smaller models (Llama3.1-8B-Instruct, Phi-4), with average degradations of 30-40%. This is in part due to metric definitions. Since smaller models achieve lower absolute scores in FULL, they have less scope for degradation than the better models.
‘In short, no matter how strong an LLM’s single-turn performance is, we observe large performance degradations in the multi-turn setting.’
The initial test indicates that some models held up better in specific tasks: Command-A on Actions; Claude 3.7 Sonnet and GPT-4.1 on code; and Gemini 2.5 Pro on Data-to-Text, indicating that multi-turn ability varies by domain. Reasoning models such as o3 and Deepseek-R1 fared no better overall, perhaps because their longer replies introduced more assumptions, which tended to confuse the conversation.
Reliability
The relationship between aptitude and reliability, clear in single-turn simulations, appeared to fall apart under multi-turn conditions. While aptitude declined only modestly, unreliability doubled on average. Models that were stable in full-format prompts, such as GPT-4.1 and Gemini 2.5 Pro, became just as erratic as weaker models like Llama3.1-8B-Instruct or OLMo-2-13B once the instruction was fragmented.
Overview of aptitude and unreliability as shown in a box plot (a), followed by reliability outcomes from experiments with fifteen models (b), and results from the gradual sharding test where instructions were split into one to eight shards (c).
Model responses often varied by as much as 50 points on the same task, even when nothing new was added, suggesting that the drop in performance was not due to a lack of skill, but to the model becoming increasingly unstable across turns.
The paper states†:
‘[Though] better models tend to have slightly higher multi-turn aptitude, all models tend to have similar levels of unreliability. In other words, in multi-turn, underspecified settings, all models we test exhibit very high unreliability, with performance degrading 50 percent points on average between the best and worst simulated run for a fixed instruction.’
To test whether performance degradation was tied to the number of turns, the authors ran a gradual sharding experiment, splitting each instruction into one to eight shards (see right-most column in image above).
As the number of shards increased, unreliability rose steadily, confirming that even minor increases in turn count made models more unstable. Aptitude remained mostly unchanged, reinforcing that the issue lies in consistency, not capability.
Temperature Control
A separate set of experiments tested whether unreliability was simply a byproduct of randomness. To do this, the authors varied the temperature setting of both the assistant and the user simulator across three values: 1.0, 0.5, and 0.0.
In single-turn formats like full and concat, reducing the assistant’s temperature significantly improved reliability, cutting variation by as much as 80 percent; but in the sharded setting, the same intervention had little effect:
Unreliability scores for different combinations of assistant and user temperature across full, concat, and sharded settings, with lower values indicating greater response consistency.
Even when both the assistant and the user were set to zero temperature, unreliability remained high, with GPT-4o showing variation around 30 percent, suggesting that the instability seen in multi-turn conversations is not just stochastic noise, but a structural weakness in how models handle fragmented input.
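For completeness, the temperature experiment amounts to a grid sweep over the two sampling settings. A minimal sketch, assuming a harness like the ones above (`run_simulations` is a hypothetical function returning the 0-100 scores for repeated runs at the given temperatures, and `conversation_metrics` is the helper sketched earlier):

```python
import itertools

TEMPERATURES = [1.0, 0.5, 0.0]

def temperature_sweep(run_simulations):
    """Unreliability for every (assistant, user) temperature pair.
    Sketch only; `run_simulations` is a hypothetical harness."""
    results = {}
    for a_temp, u_temp in itertools.product(TEMPERATURES, TEMPERATURES):
        scores = run_simulations(assistant_temp=a_temp, user_temp=u_temp)
        results[(a_temp, u_temp)] = conversation_metrics(scores)["unreliability"]
    return results
```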
Implications
The authors write of the implications of their findings at unusual length at the paper’s conclusion, arguing that strong single-turn performance does not guarantee multi-turn reliability, and cautioning against over-relying on fully-specified benchmarks when evaluating real-world readiness (since such benchmarks mask instability in more natural, fragmented interactions).
They also suggest that unreliability is not just a sampling artifact, but a fundamental limitation in how current models process evolving input, and they suggest that this raises concerns for agent frameworks, which depend on sustained reasoning across turns.
Finally, they argue that multi-turn ability should be treated as a core capability of LLMs, not something offloaded to external systems.
The authors note that their results likely underestimate the true scale of the problem, and draw attention to the ideal conditions of the test: the user simulator in their setup had full access to the instruction and could reveal shards in an optimal order, which gave the assistant an unrealistically favorable context (in real-world use, users often supply fragmented or ambiguous prompts without knowing what the model needs to hear next).
Additionally, the assistant was evaluated immediately after each turn, before the full conversation unfolded, preventing later confusion or self-contradiction from being penalized, which would otherwise worsen performance. These choices, while necessary for experimental control, mean that the reliability gaps observed in practice are likely to be even greater than those reported.
They conclude:
‘[We] believe conducted simulations represent a benign testing ground for LLM multi-turn capabilities. Because of the overly simplified conditions of simulation, we believe the degradation observed in experiments is most likely an underestimate of LLM unreliability, and how frequently LLMs get lost in conversation in real-world settings.‘
Conclusion
Anyone who has spent a significant amount of time with an LLM will likely recognize the issues formulated here, from practical experience; and most of us, I imagine, have intuitively abandoned ‘lost’ LLM conversations for fresh ones, in the hope that the LLM may ‘start over’ and cease to obsess about material that came up in a long, winding and increasingly infuriating exchange.
It’s interesting to note that throwing more context at the problem may not necessarily solve it; and indeed, to observe that the paper raises more questions than it provides answers (except in terms of ways to skip around the problem).
* Confusingly, this is unrelated to the conventional meaning of ‘sharding’ in AI.
† Authors’ own bold emphases.
First published Monday, May 12, 2025
0 notes