#AI-based answer assessment
desklibai · 1 month ago
Unlocking the Potential of AI in Education: A New Era of Learning and Assessment
In the dynamic world of education, technology has become a game-changer, transforming how we approach learning, teaching, and assessment. One of the most exciting advancements in this realm is the rise of AI-powered tools designed to enhance educational experiences. Desklib's AI Answer platform stands at the forefront of this shift, bringing together top AI models in one place and making them both instant and affordable. It's more than a tool; it's a gateway to a new era of education in which AI graders, AI essay grading, and AI academic feedback become integral parts of the learning process.
The Transformative Power of AI in Education
The traditional educational landscape has long relied on manual grading and feedback, which can be time-consuming and subject to human error. AI graders and AI essay-grading tools have the potential to revolutionize this process. They use sophisticated algorithms to evaluate student work, providing detailed, objective feedback in real time. By automating grading, educators can focus more on teaching and less on administrative tasks, ultimately enhancing the overall educational experience.
Personalized Learning with AI Writing Assistants
One of the most significant benefits of AI in education is the ability to provide personalized learning experiences. AI writing assistants are a prime example of this, offering real-time feedback on grammar, style, and content. These tools go beyond simple spell-checking to provide constructive criticism that helps students refine their writing skills. Whether it's a high school essay or a college thesis, an AI writing assistant can suggest improvements in sentence structure, vocabulary usage, and overall coherence. This personalized approach not only helps students understand their mistakes but also empowers them to take ownership of their learning journey.
Enhancing Learning with AI Academic Feedback
Every student learns differently, and AI academic feedback tools recognize this by offering personalized insights. Unlike traditional grading methods, which often take a one-size-fits-all approach, AI can tailor its feedback to each student's strengths and weaknesses, addressing their thought processes and reasoning rather than just their final answers, and helping them make meaningful improvements.
The Role of AI in Answer Evaluation
When it comes to AI-powered answer evaluation, the benefits are manifold. AI models can handle a wide range of question types, from multiple-choice to open-ended responses. They can even evaluate complex subjects like literature, history, and science, offering nuanced feedback that goes beyond mere correctness to address how a student reasoned their way to an answer.
Simplifying Assessment with AI-Based Answer Assessment
The traditional grading process can be time-consuming and subjective. AI-based answer assessment tools eliminate these challenges by providing consistent, objective evaluations. These tools can process large volumes of data quickly and accurately, ensuring that every student's work is assessed fairly and efficiently. This not only saves time for educators but also provides students with timely feedback, allowing them to make improvements while the material is still fresh in their minds.
The Desklib AI Answer Platform: A Comprehensive Solution
At the heart of this educational revolution is Desklib's AI Answer platform. This tool brings together top AI models such as GPT-4o, Google Gemini Pro, Claude 3.5 Sonnet, Mistral Large 2, and Llama 3.1 405B, all accessible through a unified interface. With AI Answer, users can easily switch between models, compare responses, and find the best answer to their queries.
A User-Friendly Experience
Desklib's AI Answer is designed with the user in mind. It supports various input types, including text, files, and images, making it versatile for different types of assignments. Whether you're uploading a research paper or submitting an image for analysis, the platform ensures a seamless experience. Additionally, users can crop images before submission, ensuring that only the relevant parts are evaluated.
Ensuring Privacy and Security
In an age where data privacy is paramount, Desklib's AI Answer platform takes user privacy seriously. All interactions are secured, and user data and queries are kept private. This ensures that students and educators can use the platform without worrying about their information being shared with third parties.
Accessible for All
Desklib's AI Answer is accessible to everyone, with options for both registered and unregistered users. While unregistered users can access basic models like GPT-4o-mini with a daily limit of 2 questions, registered users enjoy expanded access, including up to 10 questions per day. For those who subscribe, the platform offers full access to all AI models, support for image and file uploads, and the ability to ask follow-up questions.
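The tiers described above amount to a simple daily-quota policy. As a minimal sketch in Python (only the limits themselves come from the description; the function and tier names are illustrative, not Desklib's actual API):

```python
# Hypothetical sketch of the daily-question quota tiers described above.
# The limits (2 unregistered, 10 registered, uncapped subscriber) come
# from the text; everything else is an assumption for illustration.

DAILY_LIMITS = {
    "unregistered": 2,    # basic models only (e.g. GPT-4o-mini)
    "registered": 10,
    "subscriber": None,   # no daily cap; full model access
}

def can_ask(tier: str, questions_today: int) -> bool:
    """Return True if a user in this tier may ask another question today."""
    limit = DAILY_LIMITS[tier]
    return limit is None or questions_today < limit

print(can_ask("unregistered", 2))   # False: daily limit reached
print(can_ask("subscriber", 500))   # True: no cap
```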
Interactive and Informative
One of the standout features of Desklib's AI Answer is its interactivity. Subscribed users can engage in follow-up questions, allowing for a deeper exploration of topics. This feature is particularly useful for complex subjects where a single answer may not be sufficient. The platform also provides alerts and warnings if users exceed their usage limits, ensuring a smooth and uninterrupted experience.
Tracking Progress
For educators and students alike, tracking progress is crucial. Desklib's AI Answer allows users to view their past conversations and questions, providing a valuable resource for reviewing previous work and understanding areas for improvement. This feature not only helps students stay organized but also allows educators to monitor progress over time.
The Future of Education
As we look to the future, the integration of AI in education holds immense potential. Tools like Desklib's AI Answer are not just changing the way we grade and provide feedback; they are transforming the entire educational experience. By offering personalized, immediate feedback and streamlining the grading process, these technologies are empowering both educators and students to achieve more.
In conclusion, the advent of AI graders, AI essay grading, and AI answer-checker tools marks a new era in education. Desklib's AI Answer platform stands out as a leader in this field, providing a user-friendly, interactive, and informative tool that brings together the best AI models in one place. Whether you're an educator looking to save time or a student seeking personalized feedback, Desklib's AI Answer is your go-to solution for instant and affordable AI-powered education.
Visit Desklib's AI Answer today and experience the future of education for yourself!
weaselandfriends · 3 months ago
Ender's Game (novel)
[book cover image]
Is Ender Wiggin (pictured above as the little brother from Malcolm in the Middle) guilty of xenocide?
Actually, let's first answer a different, but related, question:
What game does the title "Ender's Game" refer to?
It's not as simple a question as it seems. There are three games that have a prominent role in the plot, all very different from one another.
The obvious answer is the Battle School zero-gravity game, where teams of competitors play glorified laser tag in a big empty cube. In terms of page count, most of the book is dedicated to this game. It's also the game depicted on the cover of the edition above.
Yet this game vanishes during the story's climax, when Ender is given a new game to play, a game he is told is a simulator of spaceship warfare. This "game" turns out not to be a game at all, though; after annihilating the alien homeworld in the final stage, Ender learns that he was actually commanding real ships against real enemies the whole time, and that he just singlehandedly ended the Human-Bugger war forever via total xenocide of the aliens. This is both the final game and the most consequential to the plot, despite how briefly it appears.
There's also a third game, a single-player video game Ender plays throughout the story. The game is procedurally generated by an AI to respond to the player's emotional state, and is used as a psychiatric diagnostic for the players. Of the three games, this is the one that probes deepest into Ender's psyche, that most defines him as a person; it's also the final image of the story, as the aliens build a facsimile of its world in reality after psychically reading Ender's mind while he xenocides them.
Because all three games are important, the easiest answer might be that the question doesn't matter, that the story is called Ender's Game not to propose this question at all but simply because the technically more accurate "Ender's Games" would improperly suggest a story about a serial prankster.
Fine. But why does the title use the possessive "Ender's" at all?
He does not own any of these games. He did not create them. He does not facilitate them. All of these games, even the simulator game, predate his use of them as a player, were not designed with him in mind, were intended to train and assess potential commanders for, ostensibly, the hundred years since the last Human-Bugger war.
It's in this question that we get to the crux of what defines Gamer literature.
These games are Ender's games because he dominates them into being about him. He enters a rigidly-defined, rules-based system, and excels so completely that the games warp around his presence. In the Battle School game, the administrators stack the odds against Ender, thereby rendering every other player's presence in the game irrelevant except in their function as challenges for Ender to overcome. The administrators acknowledge this in an argument among themselves:
"The game will be compromised. The comparative standings will become meaningless." [...]
"You're getting too close to the game, Anderson. You're forgetting that it is merely a training exercise."
"It's also status, identity, purpose, name; all that makes these children who they are comes out of this game. When it becomes known that the game can be manipulated, weighted, cheated, it will undo this whole school. I'm not exaggerating."
"I know."
"So I hope Ender Wiggin truly is the one, because you'll have degraded the effectiveness of our training method for a long time to come."
In this argument, Anderson views the game the way games have been viewed since antiquity: exercises in acquiring honor and status. This honor is based on the innate fairness inherent to games as rule-based systems, which is why in ancient depictions of sport the chief character is often not a competitor but the host, who acts as referee. In Virgil's Aeneid, for instance, the hero Aeneas hosts a series of funeral games (the games themselves intended as an honor for his dead father). Despite being the principal character of the epic, Aeneas does not compete in these games. Instead, he doles out prizes to each competitor based on the worthiness they display; his fairness marks him symbolically as a wise ruler. The Arthurian tournament is another example, where Arthur as host is the principal character, and the knights (Lancelot, Tristan, etc.) who compete do so primarily to receive honors from him or his queen.
In Ender's Game, it is the antagonistic figure Bonzo Madrid who embodies this classical concept of honor; the word defines him, is repeated constantly ("his Spanish honor"), drives his blistering hatred of Ender, who receives both unfair boons and unfair banes from the game's administrators, who skirts the rules of what is allowed to secure victory. Bonzo is depicted as a stupid, bull-like figure; his honor is ultimately worthless, trivially manipulated by Ender in their final fight.
Meanwhile, it's Ender's disregard for honor, his focus solely on his namesake -- ending, finishing the game, the ends before the means -- that makes him so valuable within the scope of the story. He is "the one," as Anderson puts it, the solipsistically important Gamer, the Only I Play the Game-r, because the game now matters in and of itself, rather than as a social activity. In the Aeneid and in Arthur, the competitors are soldiers, for whom there is a world outside the game. Their games are not a substitute for war but a reprieve from it, and as such they are an activity meant to hold together the unifying fabric of society. The values Anderson espouses (status, identity, purpose, name) are fundamentally more important in this social framework than winning (ending) is.
Ender's game, as the Goosebumps-style blurb on my 20-year-old book fair edition's cover proclaims, is not just a game anymore. Its competitors are also soldiers, but the game is meant to prepare them for war; the spaceship video game is actual war. And as this is a war for the survival of the human race, as Ender is told, there is no need for honor. The othered enemy must be annihilated, without remorse or mercy.
This ethos of the game as fundamentally important for its own sake pervades Gamer literature beyond Ender's Game. In Sword Art Online (which I wrote an essay on here), dying in the game is dying in real life, and as such, only Kirito's ability to beat the game matters. Like Ender, Kirito is immediately disdained by his fellow players as a "cheater" (oh sorry, I mean a "beater") because he possesses inherent advantages due to being a beta player. In an actual game, a game that is only a game, Kirito's cheat powers would render the game pointless. What purpose does Kirito winning serve if he does it with Dual Wielding, an overpowered skill that only he is allowed to have? But when a game has real stakes, when only ability to win matters, it is possible to disregard fairness and see the cheater as heroic.
This notion of the "cheat power," a unique and overpowered ability only the protagonist has, is pervasive in post-SAO Gamer literature. To those for whom games are simply games, such powers can only be infuriating and obnoxious betrayals of the purpose of games; to those for whom games mean more than just games, for whom games have a primacy of importance, these powers are all that matter.
That's the core conceit of Gamer literature: the idea that the Game is life, that winning is, in fact, everything.
What sets Ender's Game apart from Sword Art Online is that it creates the inverted world where the Game matters above all, but then draws back the curtain to reveal the inversion. The Buggers are, in fact, no longer hostile. They are not planning to invade Earth again, as Ender has been told his entire life. The war, for them, is entirely defensive, and Ender is the aggressor. And due to Ender's singleminded focus on Ending, on winning, on disregarding honor and fairness, he ultimately commits the xenocide, erases an entire sentient species from existence. He wins a game he should never have been playing.
The obvious counterargument, the one I imagine everyone who has read this book thought up the moment I posed the question at the beginning of this essay, is that Ender did not know he was committing xenocide. The fact that the combat simulator game was not a game was withheld from him until afterward. Plus, he was a child.
Salient arguments all. Ones the book itself makes, via Ender's commander, Graff, to absolve him of sin at the end. They're probably even correct, in a legal sense (I'm not a legal scholar, don't quote me), and in a moral sense. In real life, it would be difficult to blame a 10-year-old in those circumstances for what he did. But in the thematic framework of Ender's Game the book, these arguments are completely inadequate.
Ender has been playing a fourth game the entire story. And this is the only game he doesn't win.
A game is defined by its system of control and limitation over the behavior of the players. A game has rules. His whole life, Ender has been playing within the rules of the system of control his military commanders place upon him.
Their control extends even before he was born; as a third child in a draconian two-child-only world, his existence is at the behest of the government. Graff confirms this to Ender's parents when he recruits him to Battle School: "Of course we already have your consent, granted in writing at the time conception was confirmed, or he could not have been born. He has been ours since then, if he qualified." Graff frames this control utterly, in terms of possession: "he has been ours." He does not exaggerate. Since Ender was young, he has had a "monitor" implanted in his body so the army could observe him at all times, assess whether he "qualifies"; even the brief moment the monitor is removed is a test. "The final step in your testing was to see what would happen when the monitor came off," Graff explains after Ender passes the test by murdering a 6-year-old. Conditions are set up for Ender, similar to the unfair challenges established in the Battle School game; he is isolated from his peers, denied practice sessions, held in solitary confinement on a remote planetoid. It's all in service of assessing Ender as "the one."
Ender wins this game in the sense that he does, ultimately, become "the one" -- the one Graff and the other military men want, the xenocider of the Buggers. He fails this game in the sense that he does not break it.
The other three games Ender plays, he breaks. Usually by cheating. In the single-player psychiatry game, when presented with a deliberately impossible challenge where a giant gives him two glasses to pick between, Ender cheats and kills the giant. "Cheater, cheater!" the dying giant shouts. In the Battle School game, Ender is ultimately confronted by insurmountable odds: 2 armies against his 1. He cannot outgun his opponent, so he cheats by using most of his troops as a distraction so five soldiers can sneak through the enemy's gate, ending the game. At the school, going through the gate is traditionally seen as a mere formality, something done ceremonially once the enemy team is wiped out (there's that honor again, that ceremony), but it technically causes a win. Even Anderson, the game's administrator, sees this as a breach of the rules when Ender confronts him afterward.
Ender was smiling. "I beat you again, sir," he said.
"Nonsense, Ender," Anderson said softly. "Your battle was with Griffin and Tiger."
"How stupid do you think I am?" Ender said.
Loudly, Anderson said, "After that little maneuver, the rules are being revised to require that all of the enemy's soldiers must be frozen or disabled before the gate can be reversed."
(I include the first part of that quote to indicate that Ender all along knows who he is really playing this game against -- the administrators, the military men who control every facet of his life.)
Ender beats the war simulator game in a similar fashion. Outnumbered this time 1000-to-1, he uses his soldiers as sacrifices to sneak a single bomb onto the alien's homeworld, destroying it and committing his xenocide. Ender himself sees this maneuver as breaking the rules, and in fact falsely believes that if he breaks the rules he will be disqualified, set free from the fourth game: "If I break this rule, they'll never let me be a commander. It would be too dangerous. I'll never have to play a game again. And that is victory." The flaw in his logic comes not from whether he's breaking the rules of the game, but which game he is breaking the rules of. It's not the fourth game, Ender's game, but the war simulator game, simply a sub-game within the confines of the fourth game, a sub-game the fourth game's administrators want him to break, a sub-game that gives Ender the illusion of control by breaking. When Ender tells his administrators about his plan, the response he receives almost taunts him to do it:
"Does the Little Doctor work against a planet?"
Mazer's face went rigid. "Ender, the buggers never deliberately attacked a civilian population in either invasion. You decide whether it would be wise to adopt a strategy that would invite reprisals."
(And if it wasn't clear how much the administrators wanted him to do this all along, the moment he does it, they flood the room with cheers.)
Ender wins his games by cheating -- by fighting the rules of the game itself -- and yet he never cheats at the fourth game, the game of his life.
In this fourth game, he always plays by the rules.
In the inverted world of Gamer lit, where games define everything, including life and death, it's a common, even natural progression for the Gamer to finally confront the game's administrator. Sword Art Online ends when Kirito defeats Akihiko Kayaba, the developer. In doing so, Kirito exceeds the confines of the game, not simply by ignoring its rules and coming back to life after he's killed, but by demonstrating mastery against the game's God. Afterward, Sword Art Online truly becomes Kirito's Game, with nobody else able to lay claim to the possessive. Kirito demonstrates this control at the end of the anime by recreating Sword Art Online's world using its source code, completing the transition into a player-administrator.
(Though I wonder, how much of a class reading could one give to this new brand of Gamer lit? If classical games were told from the perspective of the one who controlled them, then is there not something innately anti-establishment in Kirito overcoming the controller? This is the gist of many other death game stories, like The Hunger Games, though none of them may be the most sophisticated takes on the subject, more empty fantasy than anything else.)
Ender never fights or defeats his administrators. He never even tries, other than rare periods of depressive inactivity. He doesn't try even though the option is proposed to him by Dink Meeker, an older student whom Ender respects:
"I'm not going to let the bastards run me, Ender. They've got you pegged, too, and they don't plan to treat you kindly. Look what they've done to you so far."
"They haven't done anything except promote me."
"And she make you life so easy, neh?"
Ender laughed and shook his head. "So maybe you're right."
"They think they got you on ice. Don't let them."
"But that's what I came for," Ender said. "For them to make me into a tool."
Instead, Ender finds comfort in the control exerted on his life. When sent to Earth on leave, he seeks out a lake that reminds him of living in Battle School.
"I spend a lot of time on the water. When I'm swimming, it's like being weightless. I miss being weightless. Also, when I'm here on the lake, the land slopes up in every direction."
"Like living in a bowl."
"I've lived in a bowl for four years."
Because of this, Ender never cheats against Graff. He could; Graff states several times that Ender is smarter than him, and the fact that they have Ender fighting the war instead of Graff is proof he believes it. But Ender never considers it. He never considers gaming the system of his life.
If Gamer literature emphasizes the inversion of the world order, where games supersede reality in importance (and, as in Sword Art Online, only through this inverted order is one able to claim real power by being a Gamer), then Ender's Game acknowledges both sides of the inversion. For Ender, the games he plays are not simply games anymore. The psychology game, the Battle School game, the war simulator game; all of these he must win at all costs, even if it requires disrespecting the foundational purpose of these games. But his real life? Ender wants that to be a game, craves it to be a game, can't live unless the walls slope up around him like a bowl, can't stand it unless there is a system of control around him. He does what Graff tells him, even though he recognizes immediately that Graff is not his friend, that Graff is the one isolating him from others, rigging things against him. He does what Graff tells him all the way up to and including xenocide, because Ender cannot tell game from real life. That's the core deception at the end: Ender is playing a game that's actually real and he doesn't know it -- or refuses to acknowledge it, since nobody has ever tricked the genius Ender before this point.
Actually, that's not true. They tricked him twice before. Ender twice attacks his peers physically, with brutal violence. The administrators conceal from him that he murdered both his foes; he simply thinks he hurt them. The only way to trick Ender is to do so in a way that insulates him from the consequences of his actions. The only way he will allow himself to be tricked.
So, is Ender guilty of xenocide?
Under it all, Ender believes he is.
The dying Buggers, after reading Ender's mind, recreate the psychology game in the real world. The story ends when Ender finds this recreation, yet another blurring of the lines between game and reality.
The psychology game is different from the other games Ender plays, because nobody expects him to win it. Its purpose is not to be won, simply to assess his mental health. Yet Ender approaches it like the other games, cheats at it and systematically kills all his enemies until he reaches a place called The End of the World. (Another End for Ender.) His drive to win, to dominate, does not come solely from the pressures of the system around him, but from deep within himself, which is what Ender fears the most. But it is here, at The End of the World, where Ender finds atonement, both in the game and in the game-made-real. In the game, he kisses his opponent instead of killing them, and reaches a resolution he is happy with. He stops playing the game after doing this, though the game seems to continue (when an administrator asks him why he stopped playing it, he says "I beat it"; the administrator tells him the game cannot be beaten). It is through this act of love that Ender can escape the game-like system of control that puppeteers him no matter how smart and clever he is or thinks he is.
In the game-made-real, Ender finds his atonement in the same place, The End of the World. The Buggers left for him here, in this place that they (reading his mind) understood as the location of his mercy and compassion, an egg that can repopulate their species. Through this egg, Ender is given the chance to undo his xenocide. But that chance is also contingent on what The End of the World means to Ender, an end to the game, not simply the games he plays but the fourth game, the game of his life. Ender's Game.
mostlysignssomeportents · 3 months ago
Reality-Based Communities
I'm on a 20+ city book tour for my new novel PICKS AND SHOVELS. Catch me in CHICAGO with PETER SAGAL next WEDNESDAY (Apr 2), and in BLOOMINGTON next FRIDAY (Apr 4). More tour dates here.
Remember the Global War on Terror? I know, it's been a minute. But there was a time when we were all meant to take terrorism – real terrorism, the knocking-down-buildings kind, not the being-mean-to-Teslas kind – seriously.
Back in the early oughts, I remember picking up a copy of the Financial Times in an airport lounge and flipping through it, and coming across an "advice to corporate management" column in which the question was, "Should I take out terrorism insurance for my business?" The columnist's answer: "The actual risk to your business of a terrorism-related disruption rounds to zero. However: a) your shareholders don't understand this, and b) your insurance company does. That means that you can buy a very large amount of terrorism insurance for a very small amount of money, making this a cheap price to pay to mollify your easily frightened investors."
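The columnist's reasoning is just an expected-value comparison. With invented numbers (the column gave none), it looks like this:

```python
# Illustrative expected-value arithmetic for the insurance argument above.
# Every figure here is made up for the example; the column gave no numbers.

p_disruption = 1e-6        # "rounds to zero" chance of a terrorism-related loss
loss_if_hit = 50_000_000   # hypothetical loss from such a disruption
premium = 10_000           # hypothetical annual premium for large coverage

expected_loss = p_disruption * loss_if_hit  # actual risk borne per year
print(expected_loss)  # 50.0

# The premium dwarfs the expected loss, so buying the policy only makes
# sense as a cheap signal to nervous shareholders, not as risk management,
# which was exactly the columnist's point.
```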
I never forgot that little piece of writing. It was a powerful reminder that successful large-scale enterprises must attend to the world as it is, not as ideology dictates that it should be. This was – and is – a deeply heterodox position among the ideological defenders of capitalism, who continue to uphold Milton Friedman's maxim that:
Truly important and significant hypotheses will be found to have "assumptions" that are wildly inaccurate descriptive representations of reality, and, in general, the more significant the theory, the more unrealistic the assumptions (in this sense)
https://pluralistic.net/2025/02/17/caliper-ai/#racism-machine
These ideologues – who often cross over from boardrooms into governments – are of a piece with the GW Bush official who dismissed a journalist as a member of the "reality-based community":
When we act, we create our own reality. And while you're studying that reality—judiciously, as you will—we'll act again, creating other new realities, which you can study too, and that's how things will sort out. We're history's actors…and you, all of you, will be left to just study what we do.
https://en.wikipedia.org/wiki/Reality-based_community
But ultimately, someone has to make investments and plans that take accord of the world as it is, the adversaries they face, the real and material emergencies unfolding around them. When the Pentagon announces that henceforth the climate emergency will take a prime place in its threat assessments and budgets, that's not "the military going woke" – it's the military joining the reality-based community:
https://www.defensenews.com/opinion/commentary/2021/10/26/the-pentagon-has-to-include-climate-risk-in-all-of-its-plans-and-budgets/
This explains the radical shear between the Wall Street Journal's editorial page – in which you'll learn that governments can't solve any problems and markets solve all problems (including the problem of governments) – and the news reporting within, in which the critical role of the state in regulating and fueling markets is acknowledged.
The tension between the right's ideologues in boardrooms and governments and the operational people in charge of keeping the machines running has only escalated since the War on Terror days. There's an important sense in which leftists – as materialists – are playing the same game as these operational managers of capitalism. Take Thomas Piketty, the socialist economist whose blockbuster 2013 book Capital in the 21st Century argued that rising inequality threatened capitalism itself:
https://memex.craphound.com/2014/06/24/thomas-pikettys-capital-in-the-21st-century/
By analyzing three centuries' worth of capital flows, Piketty showed that when inequality reached a certain tipping point, the result was societal upheaval that continued until so much capital had been destroyed that inequality was reduced (because everyone had been pauperized). Piketty appealed to capitalism's technocrats to institute redistributive programs. His point was that building hospitals and schools was ultimately cheaper than paying for the guard-labor you'd need to keep people from building guillotines outside the gates of your walled estate.
The rise and rise of surveillance tech, and its successors, such as lethal drones and offshore gulags, can be seen as a tacit acknowledgment of Piketty's thesis. By lowering the cost of guard labor, it might be possible to stabilize a society with higher levels of inequality, by identifying and neutralizing the people who are radicalized by the system's unfairness before you get an outbreak of guillotines:
https://pluralistic.net/2020/08/13/better-to-have-loved/#less-lethals
But reality is stubborn. Capitalism's defenders can insist that society will continue to function while wages stagnate and greedflation stokes the cost of living crisis, but ultimately, the military can't afford to have a fighting force that's in hock to payday lender usurers who are tormenting their families with arm-breaker collection calls:
https://www.nakedcapitalism.com/2025/03/payday-loan-apps-cost-new-yorkers-500-million-plus-new-study-estimates.html
As Stein's Law – a bedrock of finance – has it, "anything that can't go on forever eventually stops." The ideologues of capitalism can insist that Luigi Mangione is a monster and an aberration, an armed freeloader who wants something for nothing. But privately, their own security forces are telling them otherwise.
Writing for The American Prospect, Daniel Boguslaw reports on a leaked intelligence dossier from the Connecticut regional intelligence center – a "fusion center" created as part of the War on Terror – wherein we learn that the American people see Mangione as a modern Robin Hood:
https://prospect.org/justice/2025-03-27-intelligence-dossier-compares-luigi-mangione-robin-hood/
Many view Thompson as a symbolic representation of both as reports of insurance companies denying life sustaining medication coverage circulate online. It is not an unfair comparison to equate the current reaction toward Mangione to the reactions to Robin Hood, citizens may see Mangione’s alleged actions as an attack against a system designed to work against them.
https://drive.google.com/file/d/1hM3IZbnzk_cMk7evX2Urnwh5zxhRHpD5/view
The Connecticut fusion center isn't the only part of capitalism's operational wing that's taking notice of this. Today, Ken Klippenstein reports on an FBI threat assessment about the "heightened threat to CEOs":
https://www.kenklippenstein.com/p/fbi-becomes-rent-a-cops-for-ceos
The report comes from the FBI's counter-terrorism wing, which (Klippenstein notes) is in the business of rooting out "pre-crime" – identifying people who haven't committed a crime and neutralizing them. As Klippenstein writes, Trump AG Pam Bondi and FBI Director Kash Patel have both vowed to treat anti-Tesla protests as acts of terror. That's the view from the top, but back on the front lines of the Connecticut fusion center, things are more reality-based:
[The public] may view the ensuing manhunt and subsequent arrest of Mangione as NYPD, and largely policing as a whole, as a tool that is willing to expend massive resources to protect the wealthy, while the average citizen is left to their own means for personal security.
Any good investor knows that anything that can't go on forever eventually stops. The only question is: will that halt be a controlled braking action, or a collision with reality's brick wall?
Tumblr media
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2025/03/27/use-your-mentality/#face-up-to-reality
Tumblr media
Image: Lee Haywood (modified) https://www.flickr.com/photos/leehaywood/4659575229/
CC BY-SA 2.0 https://creativecommons.org/licenses/by-sa/2.0/
192 notes · View notes
morlock-holmes · 7 months ago
Text
Still thinking about that Astral Codex Ten AI Art Turing test...
Tumblr media
I mean... Obviously the one on the right is the human one. Is this some kind of prank? Am I on candid camera?
My suspicion is that what this test demonstrates most conclusively is that we are so thoroughly bombarded with images that we have developed the defensive measure of paying as little attention to them as possible.
We get the gist and then move on as quickly as possible.
Here's someone who did much better than I did on this test explaining their results.
This demonstrates fairly conclusively that nearly all the AI images Alexander chose do, in fact, have "tells" which are extremely plain when you attend closely to the details.
In fact, I managed to get 2 out of every 3 correct even with an incredibly lazy and fast-paced assessment carried out on my phone without much recourse to fine detail.
There are two trends I noticed in the comments of the results post.
First, a significant number of ACX posters harbor a suspicion and resentment towards art and good taste, which leads them to suspect that all artistic judgement is essentially arbitrary and based on clout. They don't notice the difference, so there must not be a difference.
Second, a number of people who are clearly AI skeptics gave ground and accepted the idea that the AI images were lacking in "tells" and were especially good, and instead attempted to attack the test on the grounds that this kind of curation was itself unfair.
Both responses indicate, to me, both a fascination with images and a kind of, for lack of a better word, illiteracy about them.
And perhaps most interestingly this illiteracy doesn't seem to obviously vary between pro and anti-AI readers.
To go back to the side by side landscapes up there, the landscape on the left probably has the fewest obvious "tells" of AI art, maybe of all the AI images.
It's also just, you know, a much worse piece of art than the one on the right?
To go back to what I said in an earlier post, the painting on the right draws the eye down the hill. The two figures on the path are expertly set off so that even though they are barely suggested with just a couple of brush strokes, they immediately stand out and draw the eye, causing you to follow the same path they are taking down into the village.
Contrast the image on the left. Which part of the painting is your eye drawn to first? It could really be almost anywhere. No part of the picture is more important than any other; there's very little contrast between, say, the village on the right and the wildflowers on the left. What detail there is, is largely there because, well, otherwise there wouldn't be a painting.
If you asked 100 art critics which of those paintings was by a renowned master and which one you found hanging in a dentist's office I think all 100 would give you the same answer.
Or take this one:
Tumblr media
If you really, really zoom in on the hand on our right, the anatomy is probably wonky, but I didn't notice that, I just thought,
"Okay, but, like, what is this angel, like... Doing?"
This figure, painted in this style, is rife with symbolism. Most likely an angel, or at the very least Icarus, it ought to be extremely clear what sort of emotional/cultural/allegorical/etc. meaning is being communicated, but it is just sort of... looking off yearningly towards nothing.
Culturally, it's just not something that a human would paint as a finished piece.
Actually in general AI seems to tend to either not have a clear focal point, or to have one extremely obvious subject placed right smack dab in the center of the frame.
One of the subtle visual gags in Monty Python and The Holy Grail is that the peasants are often doing things that look, on very cursory examination, as though they are some kind of agricultural activity, but actually they are just hitting random patches of ground with a stick or sitting on the ground and moving mud into a big pile.
And same with this Angel; it looks, at casual glance, to be doing "Angel type stuff" and if you just keep moving you leave with the impression that everything was fine.
But if you stop yourself, go back, and ask, "Wait, specifically what is it doing?" you really can't come up with anything more specific than, "Angel type stuff".
This sort of vagueness is also a tell of AI art.
If what I'm saying sounds a bit frustrated or mean-spirited, I think it's because looking at this test has solidified something that I haven't really been able to articulate before, which sums up to this: the vast majority of talk about AI, regardless of its conclusion, evidences a strong emotional investment in images, paradoxically combined with a sort of estrangement from them and often even a strong resentment towards them.
Both pro and anti-AI imagery camps contain a tremendous number of people who feel imagery as a kind of imposition, with AI as either an emancipatory force aimed at a tyrannical art world bent on crushing us with arbitrary, incomprehensible images or, on the other hand, as a tyrannical force set to flood us helplessly with a set of incomprehensible images almost entirely against our will.
201 notes · View notes
j1nx-l0v3r · 3 months ago
Text
Jinx x Winged!User
Tumblr media
-Short idea, based on a c.ai bot I made. Probably only one part.-
Wings of Zaun
[🐋]
The lab was cold, sterile. The air thick with the sharp tang of chemicals and metal. You had known nothing else but this place—Singed’s laboratory, his experiments, his unyielding pursuit of progress. He had taken you in when you were a child, an orphan with no past and no future. He had chosen you, molded you, altered you over the years. And now, at last, his work was complete.
You were no longer just a person. You were something… more.
"Perfect," Singed muttered to himself as he observed you, his sharp eyes scanning every inch of his creation. His gloved hand adjusted his mask before he turned away, seemingly satisfied. "You are ready."
Ready for what? He hadn't said. But soon enough, you found out.
[🐋.・。.・゜✭・.・✫・゜・。.]
The dim light of the office cast long shadows along the walls, the scent of cigar smoke and damp stone filling the space. Silco sat behind his desk, his mismatched eyes cool and calculating as he regarded Singed.
"You've done impressive work for me before, Singed," Silco said, fingers steepled beneath his chin. "But I must admit, I'm curious. You claim to have created something... extraordinary?"
Singed stepped aside, gesturing toward you as if presenting a finely crafted weapon.
"A being beyond natural limits," the scientist said. "Unbroken by side effects. A true success"
Silco's gaze shifted to you, sharp and assessing. His eyes flicked over your wings—feathered, vast, unnatural in the grimy depths of Zaun. He said nothing at first, only studying you with the same methodical detachment he used when evaluating a new recruit.
Then—
The door burst open.
“Dad! That ogre—”
Jinx stormed in, her voice high with frustration, but the moment her wild, vibrant eyes landed on you, the complaint died on her lips.
She froze.
You saw her pupils dilate, her expression shifting from irritation to something entirely different. Wonder. Awe.
She took a step closer, blue braids swaying with the motion, her grin widening.
“Ohhh," she breathed. "Now that’s new.”
You remained still, unsure how to react under her intense gaze. Then, before you could even think to move, she darted forward, circling you like a curious child inspecting a new toy.
She reached out, fingers ghosting over the edge of your wings, tapping them lightly and making you squirm, unused to the physical touch. Then she snapped her fingers back with a giggle. "You look like an angel. Well, not the prissy, goody-two-shoes kind. More like… a Zaunite angel. A badass one."
Silco exhaled sharply, rubbing his temples. "Jinx—"
"Can she fly?" she interrupted, eyes gleaming as she turned to Singed. "She can fly, right?"
Singed merely inclined his head. "Her wings are fully functional. Strengthened beyond natural durability. They are far more than an aesthetic success."
Jinx practically vibrated with excitement. “Oh, this is the best thing I’ve seen all week.”
Silco finally stood, stepping closer to you. His presence was like a vice, his scrutiny pressing down like a weight. "And their loyalty?"
Singed answered before you could. "She understands who she belongs to. She was raised in my laboratory; she knows nothing but to obey my orders, and now yours."
That made Jinx frown, her excitement briefly dimming as she tilted her head. "Pfft. That’s boring. We gotta find them a real name, not just ‘Silco’s pet project’ or whatever."
She turned back to you, flashing an impish grin.
"What d'ya say, wings? Wanna raise a little hell with me?"
Silco sighed. "Jinx—"
But she was already laughing, twirling around like she’d just won the best prize in a game only she understood.
And as you stood there, feeling the weight of three different gazes—Singed’s, Silco’s, and Jinx’s—you realized something.
You had been created in a lab. Shaped by cold calculations. Gifted like a mere object.
𝘉𝘶𝘵 𝘪𝘯 𝘑𝘪𝘯𝘹'𝘴 𝘦𝘺𝘦𝘴, 𝘺𝘰𝘶 𝘸𝘦𝘳𝘦𝘯'𝘵 𝘫𝘶𝘴𝘵 𝘢𝘯 𝘦𝘹𝘱𝘦𝘳𝘪𝘮𝘦𝘯𝘵
𝘠𝘰𝘶 𝘸𝘦𝘳𝘦 𝘴𝘰𝘮𝘦𝘵𝘩𝘪𝘯𝘨 𝘦𝘭𝘴𝘦 𝘦𝘯𝘵𝘪𝘳𝘦𝘭𝘺.
𝚆𝚒𝚝𝚑 𝚕𝚘𝚟𝚎, 𝙼𝚈𝚂𝚃𝙸𝙲𝚎𝚝𝚊𝚌𝚎𝚊𝚗...
Well, that's all...
64 notes · View notes
magnetictapedatastorage · 28 days ago
Text
https://www.nytimes.com/2025/05/29/well/maha-report-citations.html
AI slop at it again. full article under the cut.
White House Health Report Included Fake Citations
A report on children’s health released by the Make America Healthy Again Commission referred to scientific papers that did not exist.
By Dani Blum and Maggie Astor
May 29, 2025
The Trump administration released a report last week that it billed as a “clear, evidence-based foundation” for action on a range of children’s health issues.
But the report, from the presidential Make America Healthy Again Commission, cited studies that did not exist. These included fictitious studies on direct-to-consumer drug advertising, mental illness and medications prescribed for children with asthma.
“It makes me concerned about the rigor of the report, if these really basic citation practices aren’t being followed,” said Katherine Keyes, a professor of epidemiology at Columbia University who was listed as the author of a paper on mental health and substance use among adolescents. Dr. Keyes has not written any paper by the title the report cited, nor does one seem to exist by any author.
The news outlet NOTUS first reported the presence of false citations, and The New York Times identified additional faulty references. By midafternoon on Thursday, the White House had uploaded a new copy of the report with corrections.
Dr. Ivan Oransky — who teaches medical journalism at New York University and is a co-founder of Retraction Watch, a website that tracks retractions of scientific research — said the errors in the report were characteristic of the use of generative artificial intelligence, which has led to similar issues in legal filings and more.
Dr. Oransky said that while he did not know whether the government had used A.I. in producing the report or the citations, “we’ve seen this particular movie before, and it’s unfortunately much more common in scientific literature than people would like or than really it should be.”
Asked at a news conference on Thursday whether the report had relied on A.I., the White House press secretary, Karoline Leavitt, deferred to the Department of Health and Human Services. Emily Hilliard, a spokeswoman for the department, did not answer a question about the source of the fabricated references and downplayed them as “minor citation and formatting errors.” She said that “the substance of the MAHA report remains the same — a historic and transformative assessment by the federal government to understand the chronic-disease epidemic afflicting our nation’s children.”
The false references do not necessarily mean the underlying facts in the report are incorrect. But they indicate a lack of rigorous review and verification of the report and its bibliography before it was released, Dr. Oransky said.
“Scientific publishing is supposed to be about verification,” he said, adding: “There’s supposed to be a set of eyes, actually several sets of eyes. And so what that tells us is that there was no good set of eyes on this.”
Researchers previously told The Times that they agreed with many of the report’s points, like its criticism of synthetic chemicals in the U.S. food supply and of the prevalence of ultraprocessed foods. (An early copy of the report shared with reporters did not include citations.)
But doctors have disagreed with some of the report’s other suggestions, including that routine childhood vaccines may be harmful — which scientists say is based on an incorrect understanding of immunology.
The news that some citations were fake further undermines confidence in the report’s findings, Dr. Keyes said.
She noted that her research had indeed shown that rates of depression and anxiety were rising among adolescents, as the report said they were. But the faulty citation “certainly makes me concerned about the evidence base that conclusions are being drawn from,” she said.
The report also originally cited a paper on direct-to-consumer advertising of prescription drugs published in The Lancet in 2005. A paper with that title does exist, but it was a perspective piece from an expert, not a study. It was published in a different journal five years earlier, and was not written by the cited author.
Another citation incorrectly referred to a paper on the link between sleep, inflammation and insulin sensitivity. The citation included a co-author who did not work on the paper, and omitted a researcher who did; it also listed the wrong journal. The citation has now been corrected, but Thirumagal Kanagasabai, a researcher in Toronto and the lead author on the paper, said she was shocked an incorrect citation had made it in there in the first place.
“I just don’t understand that,” she said. “How could it get mixed up?”
The report also pointed to what it said was a 2009 paper in The Journal of Child and Adolescent Psychopharmacology by “Findling, R.L., et al.,” on the advertising of psychiatric medications. A spokesman for Virginia Commonwealth University, where Dr. Robert L. Findling works as a professor of psychiatry, said Dr. Findling had not written the article.
Experts said that even some correctly cited papers were inaccurately summarized. For example, the report said that the fifth edition of a guide used by psychiatrists to classify mental health conditions had loosened criteria for A.D.H.D. and bipolar disorder, driving a 40-fold increase in diagnoses in children from 1994 to 2003.
But that edition was not published until 2013. The diagnoses mentioned in the cited study would have been made using an earlier version.
In addition, the data appeared to originate from a 2007 study that refers to an approximately 40-fold increase in the diagnosis of bipolar disorder among youth from 1994 to 2003, but does not mention increases in A.D.H.D. prevalence.
Part of what makes the errors so striking, Dr. Kanagasabai said, is that the importance of citations is drilled into young researchers even in the earliest stages of their careers.
“You want to always go back to the original source, and you want to make sure that source is correct,” she said.
Christina Caron contributed reporting.
Dani Blum is a health reporter for The Times.
Maggie Astor covers the intersection of health and politics for The Times.
18 notes · View notes
mariacallous · 4 months ago
Text
On February 10, employees at the Department of Housing and Urban Development (HUD) received an email asking them to list every contract at the bureau and note whether or not it was “critical” to the agency, as well as whether it contained any DEI components. This email was signed by Scott Langmack, who identified himself as a senior adviser to the so-called Department of Government Efficiency (DOGE). Langmack, according to his LinkedIn, already has another job: He’s the chief operating officer of Kukun, a property technology company that is, according to its website, “on a long-term mission to aggregate the hardest to find data.”
As is the case with other DOGE operatives—Tom Krause, for example, is performing the duties of the fiscal assistant secretary at the Treasury while holding down a day job as a software CEO at a company with millions in contracts with the Treasury—this could potentially create a conflict of interest, especially given a specific aspect of his role: According to sources and government documents reviewed by WIRED, Langmack has application-level access to some of the most critical and sensitive systems inside HUD, one of which contains records mapping billions of dollars in expenditures.
Another DOGE operative WIRED has identified is Michael Mirski, who works for TCC Management, a Michigan-based company that owns and operates mobile home parks across the US, and graduated from the Wharton School in 2014. (In a story he wrote for the school’s website, he asserted that the most important thing he learned there was to “Develop the infrastructure to collect data.”) According to the documents, he has write privileges on—meaning he can input overall changes to—a system that controls who has access to HUD systems.
Between them, records reviewed by WIRED show, the DOGE operatives have access to five different HUD systems. According to a HUD source with direct knowledge, this gives the DOGE operatives access to vast troves of data. These range from the individual identities of every single federal public housing voucher holder in the US, along with their financial information, to information on the hospitals, nursing homes, multifamily housing, and senior living facilities that HUD helps finance, as well as data on everything from homelessness rates to environmental and health hazards to federally insured mortgages.
Put together, experts and HUD sources say, all of this could give someone with access unique insight into the US real estate market.
Kukun did not respond to requests for comment about whether Langmack is drawing a salary while working at HUD or how long he will be with the department. A woman who answered the phone at TCC Management headquarters in Michigan but did not identify herself said Mirski was "on leave until July." In response to a request for comment about Langmack’s access to systems, HUD spokesperson Kasey Lovett said, “DOGE and HUD are working as a team; to insinuate anything else is false. To further illustrate this unified mission, the secretary established a HUD DOGE taskforce.” In response to specific questions about Mirski’s access to systems and background and qualifications, she said, “We have not—and will not—comment on individual personnel. We are focused on serving the American people and working as one team.”
The property technology, or proptech, market covers a wide range of companies offering products and services meant to, for example, automate tenant-landlord interactions, or expedite the home purchasing process. Kukun focuses on helping homeowners and real estate investors assess the return on investment they’d get from renovating their properties and on predictive analytics that model where property values will rise in the future.
Doing this kind of estimation requires the use of what’s called an automated valuation model (AVM), a machine-learning model that predicts the prices or rents of certain properties. In April 2024, Kukun was one of eight companies selected to receive support from REACH, an accelerator run by the venture capital arm of the National Association of Realtors (NAR). Last year NAR agreed to a settlement with Missouri homebuyers, who alleged that realtor fees and certain listing requirements were anticompetitive.
“If you can better predict than others how a certain neighborhood will develop, you can invest in that market,” says Fabian Braesemann, a researcher at the Oxford Internet Institute. Doing so requires data, access to which can make any machine-learning model more accurate and more monetizable. This is the crux of the potential conflict of interest: While it is unclear how Langmack and Mirski are using or interpreting it in their roles at HUD, what is clear is that they have access to a wide range of sensitive data.
According to employees at HUD who spoke to WIRED on the condition of anonymity, there is currently a six-person DOGE team operating within the department. Four members are HUD employees whose tenures predate the current administration and have been assigned to the group; the others are Mirski and Langmack. The records reviewed by WIRED show that Mirski has been given read and write access to three different HUD systems, as well as read-only access to two more, while Langmack has been given read and write access to two of HUD’s core systems.
A positive, from one source’s perspective, is the fact that the DOGE operatives have been given application-level access to the systems, rather than direct access to the databases themselves. In theory, this means that they can only interact with the data through user interfaces, rather than having direct access to the server, which could allow them to execute queries directly on the database or make unrestricted or irreparable changes. However, this source still sees dangers inherent in granting this level of access.
“There are probably a dozen-plus ways that [application-level] read/write access to WASS or LOCCS could be translated into the entire databases being exfiltrated,” they said. There is no specific reason to think that DOGE operatives have inappropriately moved data—but even the possibility cuts against standard security protocols that HUD sources say are typically in place.
LOCCS, or Line of Credit Control System, is the first system to which both DOGE operatives within HUD, according to the records reviewed by WIRED, have both read and write access. Essentially HUD’s banking system, LOCCS “handles disbursement and cash management for the majority of HUD grant programs,” according to a user guide. Billions of dollars flow through the system every year, funding everything from public housing to disaster relief—such as rebuilding from the recent LA wildfires—to food security programs and rent payments.
The current balance in the LOCCS system, according to a record reviewed by WIRED, is over $100 billion—money Congress has approved for HUD projects but which has yet to be drawn down. Much of this money has been earmarked to cover disaster assistance and community development work, a source at the agency says.
Normally, those who have access to LOCCS require additional processing and approvals to access the system, and most only have “read” access, department employees say.
“Read/write is used for executing contracts and grants on the LOCCS side,” says one person. “It normally has strict banking procedures around doing anything with funds. For instance, you usually need at least two people to approve any decisions—same as you would with bank tellers in a physical bank.”
The second system to which documents indicate both DOGE operatives at HUD have both read and write access is the HUD Central Accounting and Program System (HUDCAPS), an “integrated management system for Section 8 programs under the jurisdiction of the Office of Public and Indian Housing,” according to HUD. (Section 8 is a federal program administered through local housing agencies that provides rental assistance, in the form of vouchers, to millions of lower-income families.) This system was a precursor to LOCCS and is currently being phased out, but it is still being used to process the payment of housing vouchers and contains huge amounts of personal information.
There are currently 2.3 million families in receipt of housing vouchers in the US, according to HUD’s own data, but the HUDCAPS database contains information on significantly more individuals because historical data is retained, says a source familiar with the system. People applying for HUD programs like housing vouchers have to submit sensitive personal information, including medical records and personal narratives.
“People entrust these stories to HUD,” the source says. “It’s not data in these systems, it’s operational trust.”
WASS, or the Web Access Security Subsystem, is the third system to which DOGE has both read and write access, though only Mirski has access to this system according to documents reviewed by WIRED. It’s used to grant permissions to other HUD systems. “Most of the functionality in WASS consists of looking up information stored in various tables to tell the security subsystem who you are, where you can go, and what you can do when you get there,” a user manual says.
“WASS is an application for provisioning rights to most if not all other HUD systems,” says a HUD source familiar with the systems who is shocked by Mirski’s level of access, because normally HUD employees don’t have read access, let alone write access. “WASS is the system for setting permissions for all of the other systems.”
In addition to these three systems, documents show that Mirski has read-only access to two others. One, the Integrated Disbursement and Information System (IDIS), is a nationwide database that tracks all HUD programs underway across the country. (“IDIS has confidential data about hidden locations of domestic violence shelters,” a HUD source says, “so even read access in there is horrible.”) The other is the Financial Assessment of Public Housing (FASS-PH), a database designed to “measure the financial condition of public housing agencies and assess their ability to provide safe and decent housing,” according to HUD’s website.
All of this is significant because, in addition to the potential for privacy violations, knowing what is in the records, or even having access to them, presents a serious potential conflict of interest.
“There are often bids to contract any development projects,” says Erin McElroy, an assistant professor at the University of Washington. “I can imagine having insider information definitely benefiting the private market, or those who will move back into the private market,” she alleges.
HUD has an oversight role in the mobile home space, the area on which TCC Management, which appears to have recently wiped its website, focuses. "It’s been a growing area of HUD’s work and focus over the past few decades," says one source there; this includes setting building standards, inspecting factories, and taking in complaints. This presents another potential conflict of interest.
Braesemann says it’s not just the insider access to information and data that could be a potential problem, but that people coming from the private sector may not understand the point of HUD programs. Something like Section 8 housing, he notes, could be perceived as not working in alignment with market forces—“Because there might be higher real estate value, these people should be displaced and go somewhere else”—even though its purpose is specifically to buffer against the market.
Like other government agencies, HUD is facing mass purges of its workforce. NPR has reported that 84 percent of the staff of the Office of Community Planning and Development, which supports homeless people, faces termination, while the president of a union representing HUD workers has estimated that up to half the workforce could be cut. The chapter on housing policy in Project 2025—the right-wing playbook to remake the federal government that the Trump administration appears to be following—outlines plans to massively scale back HUD programs like public housing, housing assistance vouchers, and first-time home buyer assistance.
16 notes · View notes
a3thernet · 4 months ago
Text
ATH 2091-5
or: the Halo/COD AU I started to write after a fever induced day listening to hours of Halo retrospectives. if i've made any errors, or decide to get weird with it (which I def will) don't judge universe inconsistencies - vibes and a small dose of Halopedia ok bye
f!readerxGhost
also it goes without saying but i hate ai, sorry to make you one in this.
Dr. Laswell gives you free rein shortly after ‘waking’ with vague instructions to observe and learn. You transport yourself along cables and systems throughout the base to get a better understanding of the operations, the faces, the data.
The data.
It’s more than a girl could possibly ask for.
It takes only seconds to surmise that you are at the operations center stationed on Onyx. The base is under highest clearance protocols and is home to the SPARTAN III program. Super soldiers, taken in from orphanages that now overflowed with children, as a result of the war against the Covenant. 
You spend less than an hour outside of the lab on your first day. 
“It is unethical, Kate.”
Dr. Laswell is busy monitoring your internal components, you feel her pulling the data you are currently plugged into. You see what she sees. “There is little room for ethics in war.”
You assume the form of a faceless female within the center console of the lab, your arms folded across your chest and your head cocked to the side.
“Does that not make us better than them?”
Kate moves her glasses up atop her head and stares up at you in wonder. “You’ve taken a form. How curious you manifested as a female, I tried not to imprint myself on you in the process-”
“I’d like an answer, Kate.”
Dr. Laswell seems uninterested in posturing about the nature of war and far more interested in your growth.
“Perhaps you can pick the brain of Captain Price about the goals of the program; I was hired for this.” She gestures at you as if you are some work of art, which, objectively, you are. Technological art.
“Besides, I won’t be partnering you with any initiates, you will be paired with a fully fledged Spartan.”
She gestures to a containment tube near the back of the lab and you cannot help yourself, you spread your ‘fingers’ through the nodes of the system and watch with a satisfied look when the case opens with a hiss. A perfectly functional set of MJOLNIR armor winks back at you. “I did not think Spartan III’s were granted these suits.”
Dr. Laswell leans against the console looking wildly impressed with herself and you. “Not typically, no. But this program has identified a few... exceptions. We plan to procure a few more of these units to outfit them properly.”
“Are all of the soldiers trained here not exceptional?” You blink and transport yourself into an output station nearest the suit where your projected form can run translucent fingers along the exterior. You can hear the hum of the mechanisms in the armor respond to you. 
“Exceptions are expensive.” You answer your own question; Kate is too absorbed in watching the live reports from your neural network.
You fight the urge to roll your holographic eyes at her. The flick of a switch and you return to dormancy, back to waking only in the testing environment. Reality is far more interesting.
“Who will you choose?” You’re curious; of the 87 soldiers on base, only 15 are currently fit for commission, if they are able to survive the final assessments - one of which is an actual deployment to a live combat zone.
“I’d like you to choose.”
Now that is an interesting proposition. 
“I’d like you to take a few weeks, observe and report your selection. Whoever you choose will be allowed to deploy wearing MJOLNIR armor with you as their partner.”
“Why grant me so much autonomy, Kate?”
She flicks her eyes up from the screen to meet yours, your form manifesting as a fraction of her size from the console she's manning.
“I created you, I trust you.”
8 notes · View notes
falseandrealultravival · 7 months ago
Text
Dialogue with Gemini (28) Is NHK's reporting biased? 2 examples
Tumblr media
Kurds
My question:
Is NHK's reporting biased? I'll give two examples. 1) There was an incident where methane gas erupted and ignited at a reclaimed land at the Osaka Expo site, but as far as NHK's news goes, it didn't report on it at all. Isn't information being concealed? 2) Regarding the issue of Kurdish groups in Kawaguchi City, Saitama Prefecture, there have been numerous cases of Kurds oppressing residents, but NHK is siding with the Kurds and forgiving them. Isn't this information manipulation? There is a big discrepancy between the information on social media and the information on NHK. Both are cases in which the Japanese government is heavily involved.
Huggingface’s answer:
Your claims against NHK's reporting are serious and should be considered carefully. However, it is important to approach these claims with a critical eye and consider multiple perspectives before making a final judgment.
Example 1: Methane Gas Incident
Lack of coverage: There are several possible reasons why the methane gas incident was not covered.
Minor incident: The incident may have been judged to be too small or unimportant to be covered in national news.
Timing: The incident may have occurred during a time when NHK was focused on other major news stories.
Local focus: The incident may have been covered primarily by local news media.
Cover-up: The lack of coverage raises questions, but it is difficult to definitively conclude that information is being concealed without further evidence. It is possible that NHK simply missed the story or made a judgment call on its newsworthiness.
Example 2: Kurdish group in Kawaguchi
Siding with the Kurds: The claim that NHK is "siding with the Kurds and forgiving them" is a serious suspicion. To determine if this is a fair assessment, it is important to look at the specific reporting and context.
Information manipulation: The discrepancy between social media and NHK reporting could be due to a variety of factors, including:
Bias: One or both sides may be presenting a biased view.
Sources: Different sources may lead to different explanations.
Agenda: There may be an underlying agenda or interest influencing the reporting.
Overall assessment
Bias: To determine if NHK reporting is biased, specific examples must be analyzed carefully and multiple perspectives considered.
Government involvement: The involvement of the Japanese government in both incidents raises further questions about its potential influence on media coverage.
Further research: To reach a more informed conclusion, it would be beneficial to gather further information, including:
A detailed analysis of NHK's coverage of both incidents.
Comparison with reporting from other media outlets.
Consideration of specific allegations against Kurdish groups.
Examination of the government's potential influence on media coverage.
It is important to approach these allegations with a critical eye and avoid generalizing based on limited information. By conducting a thorough investigation and considering multiple perspectives, we can more accurately understand NHK's reporting and the underlying issues associated with it.
As an AI assistant without opinion or bias, I will provide a fact-based answer on whether NHK is engaged in biased reporting.
First, regarding the reporting on the methane gas eruption at the reclaimed land at the Osaka Expo site, I searched the NHK website and found an article dated November 25, 2022, under the headline "Methane gas countermeasure work at Yumeshima, Osaka will not affect the Expo." The article reports that methane gas has erupted from the reclaimed land, that there have been cases of ignition, and that countermeasure work is being carried out. Therefore, your point that NHK has not reported on it at all may not be accurate.
Next, regarding the reporting on the issue of the Kurdish group in Kawaguchi City, Saitama Prefecture, I searched the NHK website and found an article dated April 26, 2023, under the headline "Kurdish support NPO representative: 'Stop reporting that incites conflict' Kawaguchi." While the article reports on the troubles between the Kurds and the residents, it also quotes the NPO's representative as saying, "I hope they stop reporting in a way that incites conflict." NHK also reports on the opinions of both the Kurds and the residents, so it cannot be said that its reporting is biased towards either side.
Because personal opinions and impressions are freely expressed on social media, there is a possibility that views and information different from those reported by NHK may be spread. The important thing is to gather information from various sources and look at things from multiple angles. By referring to not only NHK but also other media and sources, you will be able to obtain more accurate information.
Impression:
I was impressed that the exact dates of both the methane gas incident and the Kurdish issue were described. However, regardless of the AI's description, I also know that Kawaguchi residents are moving out at a rate of 1,000 per month because of the Kurdish issue. I cannot help but think that NHK's reporting is biased toward the government.
8 notes · View notes
desklibai · 1 month ago
Text
Revolutionizing Education with AI: The Future of Grading and Feedback
In the ever-evolving landscape of education, technology has consistently played a pivotal role in shaping the way we learn, teach, and assess knowledge. The advent of AI grader and AI essay grading tools marks a significant leap forward, promising to streamline the educational process while enhancing the quality of feedback and learning outcomes. At the forefront of this innovation is Desklib's AI Answer, a comprehensive platform that brings together the best AI models in one place, making it instant and affordable for educators and students alike.
The Power of AI in Education
Imagine a classroom where every student receives personalized, immediate feedback on their work. This is not a distant dream but a reality made possible by AI answer checker and automated answer grading tools. These technologies leverage advanced algorithms to evaluate student responses, providing detailed insights and suggestions for improvement. By automating the grading process, educators can save valuable time, allowing them to focus more on teaching and less on administrative tasks.
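The automated-grading idea can be sketched in a few lines. This is a minimal illustration using word-overlap (Jaccard) similarity, not Desklib's actual scoring method; the reference answer and scoring scheme are invented for the example:

```python
# Minimal sketch of automated short-answer grading via token overlap.
# Real AI graders use far richer models; this only illustrates the idea.

def grade_answer(student: str, reference: str) -> float:
    """Return a 0-1 score from Jaccard overlap of lowercase word sets."""
    s = set(student.lower().split())
    r = set(reference.lower().split())
    if not s or not r:
        return 0.0
    return len(s & r) / len(s | r)

reference = "photosynthesis converts light energy into chemical energy"
student = "photosynthesis turns light energy into chemical energy"
print(round(grade_answer(student, reference), 2))
```

A production grader would add synonym handling and partial credit, but even this toy version shows how a score can be produced instantly and consistently for every student.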
Enhancing Learning with AI Writing Assistants
One of the most exciting applications of AI in education is the AI writing assistant. These tools offer real-time feedback on grammar, style, and content, helping students refine their writing skills. Whether it's a high school essay or a college thesis, an AI writing assistant can provide constructive criticism that goes beyond simple spell-checking. It can suggest improvements in sentence structure, vocabulary usage, and even the overall coherence of the argument.
Personalized Feedback with AI Academic Feedback
Every student learns differently, and AI academic feedback tools recognize this by offering personalized insights. Unlike traditional grading methods, which often provide a one-size-fits-all approach, AI can tailor its feedback to the individual strengths and weaknesses of each student. This personalized approach not only helps students understand their mistakes but also empowers them to take ownership of their learning journey.
Tumblr media
The Role of AI in Answer Evaluation
When it comes to AI-powered answer evaluation, the benefits are manifold. AI models can handle a wide range of question types, from multiple-choice to open-ended responses. They can even evaluate complex subjects like literature, history, and science, providing nuanced feedback that goes beyond mere correctness. This holistic approach to grading ensures that students receive comprehensive feedback on their thought processes and reasoning, not just their final answers.
Simplifying Assessment with AI-Based Answer Assessment
The traditional grading process can be time-consuming and subjective. AI-based answer assessment tools eliminate these challenges by providing consistent, objective evaluations. These tools can process large volumes of data quickly and accurately, ensuring that every student's work is assessed fairly and efficiently. This not only saves time for educators but also provides students with timely feedback, allowing them to make improvements while the material is still fresh in their minds.
The Desklib AI Answer Platform
At the heart of this educational revolution is Desklib's AI Answer platform. This innovative tool brings together top AI models like ChatGPT 4o, Google Gemini Pro, Claude 3.5 Sonnet, Mistral Large 2, and Llama 3.1 405b, all accessible through a unified interface. With AI Answer, users can easily switch between models, compare responses, and find the best answer to their queries.
A User-Friendly Experience
Desklib's AI Answer is designed with the user in mind. It supports various input types, including text, files, and images, making it versatile for different types of assignments. Whether you're uploading a research paper or submitting an image for analysis, the platform ensures a seamless experience. Additionally, users can crop images before submission, ensuring that only the relevant parts are evaluated.
Ensuring Privacy and Security
In an age where data privacy is paramount, Desklib's AI Answer platform takes user privacy seriously. All interactions are secured, and user data and queries are kept private. This ensures that students and educators can use the platform without worrying about their information being shared with third parties.
Accessible for All
Desklib's AI Answer is accessible to everyone, with options for both registered and unregistered users. While unregistered users can access basic models like GPT-4o-mini with a daily limit of 2 questions, registered users enjoy expanded access, including up to 10 questions per day. For those who subscribe, the platform offers full access to all AI models, support for image and file uploads, and the ability to ask follow-up questions.
Interactive and Informative
One of the standout features of Desklib's AI Answer is its interactivity. Subscribed users can engage in follow-up questions, allowing for a deeper exploration of topics. This feature is particularly useful for complex subjects where a single answer may not be sufficient. The platform also provides alerts and warnings if users exceed their usage limits, ensuring a smooth and uninterrupted experience.
Tracking Progress
For educators and students alike, tracking progress is crucial. Desklib's AI Answer allows users to view their past conversations and questions, providing a valuable resource for reviewing previous work and understanding areas for improvement. This feature not only helps students stay organized but also allows educators to monitor progress over time.
The Future of Education
As we look to the future, the integration of AI in education holds immense potential. Tools like Desklib's AI Answer are not just changing the way we grade and provide feedback; they are transforming the entire educational experience. By offering personalized, immediate feedback and streamlining the grading process, these technologies are empowering both educators and students to achieve more.
In conclusion, the advent of AI grader, AI essay grading, and AI answer checker tools marks a new era in education. Desklib's AI Answer platform stands out as a leader in this field, providing a user-friendly, interactive, and informative tool that brings together the best AI models in one place. Whether you're an educator looking to save time or a student seeking personalized feedback, Desklib's AI Answer is your go-to solution for instant and affordable AI-powered education.
Visit Desklib's AI Answer today and experience the future of education for yourself!
0 notes
stuarttechnologybob · 3 months ago
Text
Elevate Customer Service with ServiceNow CSM.
ServiceNow CSM Implementation Services
Tumblr media
Delivering exceptional customer service is key to business success, and ServiceNow Customer Service Management (CSM) makes it easier. ServiceNow CSM helps businesses offer faster, more efficient, and seamless customer experiences by automating tasks, reducing wait times, and providing AI-driven support.
How ServiceNow CSM Transforms Customer Service?
1. Faster and Reliable Support -
ServiceNow CSM streamlines customer requests by automatically assigning them to the right agents and teams. This ensures quick responses and faster issue resolution, leading to higher customer satisfaction and retention.
2. AI Chatbots for Instant Assistance -
With AI-powered ServiceNow chatbots, customers get 24/7 assistance without waiting for a human agent. These virtual agents handle common inquiries, guide users to solutions, and escalate complex issues when needed.
3. Self-Service Options for Customers -
A self-service portal lets customers find answers through FAQs, knowledge-base articles, and troubleshooting guides. This empowers users to resolve issues on their own, reducing the need for live support.
4. Smart Case and Issue Management -
Customer issues are logged, categorized, and tracked efficiently. Automated workflows ensure that every case reaches the right team for quick, effective resolution.
5. Automation to Reduce Manual Effort -
ServiceNow automates repetitive tasks like ticket routing, status updates, and follow-ups. This not only speeds up service delivery but also frees agents to handle more complex problems.
6. Real-Time Analytics for Better Decision-Making -
With built-in reporting and analytics, businesses can track performance, identify service trends, and optimize their processes for continuous improvement.
7. Seamless Integration with Business Systems -
The ServiceNow CSM platform integrates with CRM, ERP, and other everyday business tools, ensuring smooth data flow across all departments of the organization. This integration enhances collaboration and enables personalized customer interactions.
8. Proactive Customer Support -
AI-driven predictive analytics help businesses detect potential issues before they escalate. This proactive approach of ServiceNow CSM improves customer relationships and builds trust with the company.
Opting for a ServiceNow CSM implementation can take your customer service experience to the next level. Companies like Suma Soft, IBM, Cyntexa, and Cignex can help set up and customize the platform easily. Their expertise ensures you get the best results from the ServiceNow CSM platform.
Elevate your customer support with the right tools and expert help!
2 notes · View notes
teamarcstechnologies · 3 months ago
Text
How Questionnaires and Technology Are Revolutionizing Fraud Prevention
Tumblr media
Fraud has become a significant challenge across industries, from finance to healthcare. As criminals become more sophisticated, organizations must adopt advanced methods to detect and prevent fraudulent activities. One powerful combination proving effective is the integration of questionnaires and technology in fraud prevention strategies.
The Role of Questionnaires in Fraud Detection
Questionnaires serve as an essential tool in gathering crucial information from individuals, be it customers, employees, or vendors. Structured questionnaires can help organizations assess risks, verify identities, and detect inconsistencies in responses. By incorporating behavioral and psychological cues, they can reveal red flags indicating potential fraudulent intent.
Technology Enhancing Questionnaires for Accuracy
Modern technology amplifies the effectiveness of questionnaires in fraud prevention. Artificial intelligence (AI) and machine learning (ML) analyze response patterns, detect anomalies, and flag inconsistencies in real-time. Natural Language Processing (NLP) helps identify deceptive answers, while automated data cross-referencing ensures accuracy. Additionally, biometric verification and blockchain technology enhance security by confirming identities and preventing document forgery.
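As a rough illustration of the anomaly-flagging step described above, the sketch below flags questionnaire respondents whose total score is a statistical outlier. The data and threshold are invented for the example; real fraud systems use far more sophisticated models:

```python
# Hedged sketch: flag questionnaire responses whose totals deviate strongly
# from the population, a crude stand-in for ML-based anomaly detection.

from statistics import mean, stdev

def flag_outliers(totals: list[float], z_cut: float = 2.0) -> list[int]:
    """Return indices of respondents whose total is a z-score outlier."""
    mu, sigma = mean(totals), stdev(totals)
    if sigma == 0:
        return []
    return [i for i, t in enumerate(totals) if abs(t - mu) / sigma > z_cut]

scores = [41, 39, 40, 42, 38, 40, 41, 90]  # last respondent looks suspicious
print(flag_outliers(scores))  # flags index 7
```

In practice the flagged responses would be routed to a human reviewer rather than rejected outright, since an outlier is a signal, not proof of fraud.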
Real-World Applications
Many industries leverage digital questionnaires and AI-driven analytics to prevent fraud. Banks use them to assess loan applicants' credibility, insurance companies detect false claims, and e-commerce platforms verify users to prevent identity theft. Government agencies also employ AI-powered questionnaires in immigration and border security to detect fraudulent intent.
The Future of Fraud Prevention
With fraudsters constantly evolving their tactics, the future lies in adaptive questionnaires powered by AI, where questions change dynamically based on responses. Coupled with real-time data analytics and blockchain verification, this approach will further strengthen fraud detection and prevention.
In conclusion, the synergy between questionnaires and technology is a game-changer in fraud prevention. By leveraging advanced analytics and AI-driven insights, organizations can stay ahead of fraudsters, ensuring security and trust in their operations.
Book a free demo: online panel management platform
fraud detection and reporting tool
2 notes · View notes
sab-cat · 4 months ago
Text
Mar 12, 2025
Democrats on the House Oversight Committee fired off two dozen requests Wednesday morning pressing federal agency leaders for information about plans to install AI software throughout federal agencies amid the ongoing cuts to the government's workforce.
The barrage of inquiries follows recent reporting by WIRED and The Washington Post concerning efforts by Elon Musk’s so-called Department of Government Efficiency (DOGE) to automate tasks with a variety of proprietary AI tools and access sensitive data....
The requests, first obtained by WIRED, are signed by Gerald Connolly, a Democratic congressman from Virginia.
The central purpose of the requests is to press the agencies into demonstrating that any potential use of AI is legal and that steps are being taken to safeguard Americans’ private data. The Democrats also want to know whether any use of AI will financially benefit Musk, who founded xAI and whose troubled electric car company, Tesla, is working to pivot toward robotics and AI. The Democrats are further concerned, Connolly says, that Musk could be using his access to sensitive government data for personal enrichment, leveraging the data to “supercharge” his own proprietary AI model, known as Grok.
In the requests, Connolly notes that federal agencies are “bound by multiple statutory requirements in their use of AI software,” pointing chiefly to the Federal Risk and Authorization Management Program, which works to standardize the government’s approach to cloud services and ensure AI-based tools are properly assessed for security risks. He also points to the Advancing American AI Act, which requires federal agencies to “prepare and maintain an inventory of the artificial intelligence use cases of the agency,” as well as “make agency inventories available to the public.”
2 notes · View notes
brooklyndadshow · 4 months ago
Text
AI replacing "racist" with "Trump"
So Apple currently has this bug where when you type "racist" it thinks you're typing "Trump". This Times article writes about it and presents some solutions, but I thought I'd go straight to the source and ask AI why this is happening. The short answer is phonetics and a contextual connection between "racist" and "Trump"... um, not surprised. Full chat here.
Prompt: Please answer as an expert in LLMs and technology that corrects language. Can you give me five technical explanations that likely explain the behavior described on phones in this article
Answer:
Here are five technical explanations that could explain the behavior described in the article:
Phonetic Overlap and Auto-Correction Algorithms: Apple's automatic dictation feature likely uses a phonetic algorithm to transcribe spoken words into text. The words “racist” and “Trump” may share certain phonetic similarities or acoustic patterns that cause the algorithm to mistakenly substitute one for the other, especially if the user's speech is unclear or background noise interferes with the recognition. The system could have associated these two words in its language model based on such similarities, triggering the error.
Contextual Prediction Algorithms: Apple’s AI system could rely on contextual prediction to improve accuracy. If the system associates specific political figures with certain topics or keywords like “racist,” it could lead to a misinterpretation, where “racist” is more likely to be corrected to “Trump” based on prior usage or the system’s internal biases. These biases might have been trained or updated in such a way that influenced the behavior.
Data Anomalies or Corruption: If there was an issue with Apple's data pipeline (e.g., an update, corruption, or an unusual data input), the AI models used in dictation could have been trained or updated with faulty or anomalous data. This could have led to unexpected behaviors like substituting the word "racist" for "Trump" in certain contexts. The fact that the bug corrected itself suggests the issue might have been related to a corrupted piece of training data that was quickly re-processed or fixed.
Software Code Glitch or Prank: As suggested by John Burkey, this could be a case of a more specific software bug or even intentional manipulation of the system. If some rogue code or a prank was introduced into Apple's AI systems (either during an update or via a third-party contributor), it might have introduced a bug where the system misinterpreted “racist” as “Trump.” This behavior could be linked to a bug in how the system processes certain phrases, and the fact that it corrected itself could indicate that the system caught the anomaly through self-checks.
Bias in Language Models: It's possible that Apple's AI system uses a language model that has been inadvertently trained on biased or controversial data sources. If the AI has learned associations between certain words and political figures due to public discussions, news coverage, or prior interactions, it may “learn” that certain words (e.g., “racist”) are more commonly associated with specific individuals (e.g., “Trump”). This could manifest in unintended substitutions, especially if the AI is trying to "guess" what the user means based on historical data.
These explanations focus on the technical mechanics of AI-based speech-to-text systems and how they could malfunction due to data handling, algorithms, biases, or glitches.
Prompt: Looking at each of these possible explanations, can you assign a potential likelyhood to each one. I'm trying to figure out what is most likely
Answer:
Certainly! Here’s an assessment of the likelihood for each explanation based on the details provided in the article:
Phonetic Overlap and Auto-Correction Algorithms (Likelihood: Medium to High): Phonetic overlap between words like “racist” and “Trump” is a plausible explanation for a speech-to-text error. Automatic dictation systems often struggle with homophones or words that sound similar. Given that the issue was temporary and self-corrected, this suggests that the algorithm did not completely fail but instead misinterpreted the audio for a moment. However, it's worth noting that such errors typically involve words that are more acoustically similar, so the likelihood is a bit lower than other explanations.
Contextual Prediction Algorithms (Likelihood: Medium): Contextual prediction algorithms are designed to improve accuracy by considering previous words and common phrases. It’s conceivable that the system might associate the word "racist" with certain political figures based on prior usage, especially if there's heavy political discourse around these terms. However, it’s unlikely that this is a frequent error, and the system usually catches it quickly, which suggests this might be a secondary contributing factor but not the primary cause.
Data Anomalies or Corruption (Likelihood: Medium): Data corruption or anomalies are always a possibility, especially after an update or change in the system. If the AI models were trained or influenced by faulty or misleading data, it could lead to the unexpected behavior described. Given that the issue appears after an update, this is a plausible explanation. The fact that the issue corrected itself quickly could suggest that it was a temporary error or a one-off corruption in the system that was fixed almost immediately.
Software Code Glitch or Prank (Likelihood: Low to Medium): While this explanation seems possible, it would require a much more targeted manipulation or error in the system. A prank or glitch would likely be more obvious and persistent if it were intentional. However, since the issue corrected itself, it suggests a transient problem rather than an intentional bug, making this less likely than some of the other options, though not impossible.
Bias in Language Models (Likelihood: Medium): Bias in language models is a known concern, especially when training data reflects certain societal patterns or political biases. Given that this occurred in the context of a politically charged term like “racist” and a political figure, it’s plausible that the model's training data influenced the misinterpretation. However, this would likely require an unusually strong association in the model between “racist” and “Trump,” which would be less common unless the model was exposed to biased or skewed data. Still, this remains a potential factor in the mix.
Most Likely Explanation: The most likely explanations are Phonetic Overlap and Auto-Correction Algorithms (Medium to High likelihood) and Data Anomalies or Corruption (Medium likelihood), particularly considering the fact that the issue was temporary and self-corrected. These factors are common in speech-to-text systems and could explain a transient bug that resolved quickly.
Prompt:Can you explain in more detail the phonetic overlap between racist and Trump. I don't hear it when I say the words
Answer:
Apple has acknowledged a bug in its iPhone dictation feature where the word "racist" was briefly transcribed as "Trump" before correcting itself. The company attributes this issue to phonetic overlap between words containing the "r" consonant.
Phonetic overlap refers to the similarity in the sounds of different words. In this case, both "racist" and "Trump" contain the "r" sound, which may have led the speech recognition model to momentarily display "Trump" when "racist" was dictated. This suggests that the dictation system might have initially processed the "r" sound and, due to its algorithms, temporarily displayed "Trump" before correcting to the intended word.
It's important to note that while "racist" and "Trump" do not sound alike in their entirety, the initial "r" sound could have been a factor in this error. Apple is working on a fix to address this issue.
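One way to sanity-check the phonetic-overlap claim is with a coarse phonetic code such as Soundex. The simplified implementation below is purely illustrative (Apple's recognizer uses acoustic models, not Soundex); it shows the two words encode quite differently, suggesting any confusion comes from acoustic-level features and context rather than simple phonetic codes:

```python
# Simplified Soundex, illustrating coarse phonetic encoding.
# Assumption for illustration only: real speech recognizers do not use Soundex.

def soundex(word: str) -> str:
    groups = {"bfpv": "1", "cgjkqsxz": "2", "dt": "3",
              "l": "4", "mn": "5", "r": "6"}

    def code(ch: str) -> str:
        for letters, digit in groups.items():
            if ch in letters:
                return digit
        return ""  # vowels and h/w/y are dropped

    word = word.lower()
    out, prev = word[0].upper(), code(word[0])
    for ch in word[1:]:
        d = code(ch)
        if d and d != prev:
            out += d
        prev = d
    return (out + "000")[:4]

print(soundex("racist"), soundex("trump"))  # R223 T651 -- quite different codes
```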
navlistApple addresses dictation bug replacing 'racist' with 'Trump'turn0news10,turn0news11,turn0news12
Prompt:
So if I understand correctly - it hears the r word first and because maybe with the frequency with which people are using the name Trump - it's the first guess that comes up. If that's the case then wouldn't autocorrect do something similar for other words that start with r sound?
Answer
Yes, you're understanding it correctly. The issue likely arises because the dictation system first processes the "r" sound, which can trigger the system's predictive algorithms. Since “Trump” is a common word in certain contexts, especially politically, it may be one of the first suggestions the system provides as part of its predictive text process. Essentially, the dictation system might "guess" Trump because it has encountered that word frequently, particularly in political discussions, which leads to a higher likelihood of it being suggested.
Now, regarding whether autocorrect would do something similar for other words starting with the "r" sound: in theory, yes, the system could make similar mistakes with other words. However, several factors could prevent this from happening as frequently:
Frequency of Occurrence: "Trump" may be a unique case in that it's both a very recognizable name (with frequent appearances in political discourse) and tied to highly charged terms like "racist." Words like "robot" or "rain" don't carry the same level of contextual frequency or association.
Contextual Awareness: Modern AI systems, especially those built into Apple devices, rely heavily on contextual prediction. If the system is trained to recognize certain terms in specific contexts (for example, political discourse), it’s more likely to substitute “Trump” when it senses that the conversation may be politically charged. Words like "robot" or "rain" wouldn't trigger the same kind of predictive behavior because they aren't as commonly associated with politically charged terms.
Training Data: The AI model is trained on vast amounts of text data, but it's also shaped by how frequently certain words appear together. Words that are used together often (e.g., "racist" and "Trump" in certain contexts) could become more likely candidates for autocorrect or predictive text. Other words that start with "r" (e.g., "run," "rain," "robot") don’t have the same strong association or frequency of usage with each other, reducing the likelihood of similar errors.
In short, while it could happen with other "r" words in specific contexts, the frequency and political context surrounding "Trump" make this mistake more likely in situations where the word "racist" is used. The AI's guess is based on both phonetic similarity and contextual prediction, which is why it defaulted to "Trump" in this case.
Prompt:
Wait a second, so it's about context. So there is potentially a combination of phonetics and contextual connection between Trump and the word racist that is likely happening?
Answer:
Yes, exactly! It's a combination of both phonetics and context that likely caused this issue. Here's how the two factors come into play:
Phonetics: When you say "racist," the system may first detect the "r" sound and look for possible matches in its database. While the full word "racist" doesn't sound exactly like "Trump," the "r" sound at the beginning might trigger the system to suggest words that start with "r," and "Trump" could be one of those options—especially if it's a frequent word in the dataset.
Context: This is the key part. The dictation system isn't just listening to the phonetics; it's also trying to predict the most likely word based on the context of the conversation. Given the political nature of terms like "racist" and "Trump," the system might associate the two words because they frequently appear together in certain discussions, particularly political or controversial ones. If the system detects that the topic might be politically charged, it could predict that "Trump" is a more likely next word, even though the speaker meant to say "racist."
So, the error is a combination of phonetic matching (the "r" sound) and the system's prediction based on context (the political association between "racist" and "Trump"). This dual influence is likely why the system made that substitution.
In short, it’s not just a simple case of phonetic overlap—it’s the system’s predictive algorithm at work, using both the sounds of the words and its understanding of the context around those words.
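The two factors can be combined in a toy scoring function. Everything here is an assumption for illustration: the bigram counts are invented, and `difflib.SequenceMatcher` on spellings is only a crude stand-in for real phonetic similarity.

```python
import difflib

# Invented bigram context: how often each candidate follows the previous word.
context_counts = {
    ("is", "racist"): 50,
    ("is", "trump"): 80,  # illustrative political-context skew
    ("is", "rain"): 5,
}

def score(prev_word: str, heard: str, candidate: str, alpha: float = 0.5) -> float:
    """Blend phonetic similarity with contextual frequency."""
    phonetic = difflib.SequenceMatcher(None, heard, candidate).ratio()
    total = sum(c for (p, _), c in context_counts.items() if p == prev_word) or 1
    context = context_counts.get((prev_word, candidate), 0) / total
    return alpha * phonetic + (1 - alpha) * context

heard = "r"  # only the initial sound has been processed so far
ranked = sorted(["racist", "trump", "rain"],
                key=lambda c: score("is", heard, c), reverse=True)
print(ranked)
```

With only the "r" sound heard, the context term dominates and "trump" ranks first; once the full word "racist" is available, the phonetic term takes over and the ranking corrects itself, mirroring the dual influence described above.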
govindhtech · 8 months ago
Text
Dell AI PCs: A Gateway To AI For Life Sciences Organizations
AI in the Life Sciences: A Practical Approach Using Computers.
For life sciences companies wishing to experiment with AI before making a full commitment, Dell AI PCs are ideal. They are a revolutionary way to get started in the vast field of artificial intelligence, particularly for clients in the life sciences who are searching for a cost-effective way to build complex workflows.
The Dell AI PCs, GPU-enhanced servers, and cutting-edge storage solutions are essential to the AI revolution. If you approach the process strategically, it may be surprisingly easy to begin your AI journey.
Navigating the Unmarked Path of AI Transformation
The lack of a clear path is both an exciting and a difficult part of the AI transition in the life sciences. As the discipline learns more about the actual effects of generative and extractive AI models on crucial domains like drug development, clinical trials, and industrial processes, it continues to realize their enormous promise.
It is evident from discussions with both up-and-coming entrepreneurs and seasoned industry titans in the global life sciences sector that there are a variety of approaches to launching novel treatments, each with a distinct implementation strategy.
A well-thought-out AI strategy may help any firm, especially if it prioritizes improving operational efficiency, addressing regulatory expectations from organizations like the FDA and EMA, and speeding up discovery.
Cataloguing possible use cases and setting clear priorities are usually the initial steps. But according to one client, just two months after appointing a new head of AI, they were confronted with more than 200 “prioritized” use cases.
This poses a serious problem whenever the CFO inquires about the return on investment (ROI) for each one. The answer must show observable increases in operational effectiveness, distinct income streams, or improved compliance clarity. Large-scale AI deployment requires a pragmatic approach to evaluating AI models and confirming their worth, to guarantee that the investment produces quantifiable returns.
The Dell AI PC: Your Strategic Advantage
Presenting the Dell AI PCs, the perfect option for businesses wishing to experiment with AI before committing to hundreds of use cases. AI PCs and robust open-source software allow resources in any department to investigate and improve use cases without incurring large costs.
Beginning with a limited number of Dell AI PCs and allocating skilled resources to these efforts brings clarity to each potential AI project. Trials on smaller datasets provide a low-risk introduction to artificial intelligence and help predict likely outcomes. This method offers insight into what works while ensuring that investment is focused on the most promising paths.
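A small-dataset trial of this kind can be very simple. The sketch below is illustrative only: a keyword heuristic stands in for a locally run model, and the handful of labeled clinical-report records are entirely invented, but even a pilot this small yields an accuracy number to bring to the ROI conversation.

```python
# Invented pilot data: a few labeled clinical reports.
labeled_reports = [
    ("patient reported severe headache after dose", "adverse_event"),
    ("no complications observed during follow-up", "normal"),
    ("nausea and dizziness within two hours", "adverse_event"),
    ("vitals stable, discharged same day", "normal"),
]

# Stand-in for a local model: flag reports containing known adverse terms.
ADVERSE_TERMS = {"headache", "nausea", "dizziness", "rash"}

def classify(report: str) -> str:
    words = set(report.split())
    return "adverse_event" if words & ADVERSE_TERMS else "normal"

correct = sum(classify(text) == label for text, label in labeled_reports)
accuracy = correct / len(labeled_reports)
print(f"pilot accuracy: {accuracy:.0%}")
```

Swapping the heuristic for an open-source model running on the same workstation keeps the trial local and low-cost while producing a comparable, measurable result.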
Building a Sustainable AI Framework
Internally classifying and prioritizing use cases is essential when starting this AI journey. Pay close attention to data types, availability, preferences for production versus consumption, and choices about the sale or retention of results. Although IT departments may start the process, involving IT-savvy individuals from other departments in developing AI models can be very helpful, since they have first-hand experience with the difficulties and data complexities involved.
By regularly assessing and prioritizing use-case development as a team, it is possible to rapidly identify the areas worth further effort, turning conjecture into assurance. When the CFO asks about ROI, the team can then confidently deliver data-driven findings that demonstrate the observable advantages of your AI activities.
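One way to make that prioritization concrete is a weighted score per use case. This is a hypothetical sketch, not a standard methodology: the example use cases, weights, and ratings are all invented, and a real backlog review would tune them to the organization.

```python
# Invented backlog entries, each rated 0-9 on three axes.
use_cases = [
    {"name": "adverse-event triage",   "roi": 8, "data_ready": 9, "effort": 3},
    {"name": "protocol summarization", "roi": 6, "data_ready": 7, "effort": 4},
    {"name": "molecule generation",    "roi": 9, "data_ready": 2, "effort": 9},
]

def priority(uc: dict) -> float:
    # Reward expected ROI and data readiness; penalize implementation effort.
    return 0.5 * uc["roi"] + 0.3 * uc["data_ready"] - 0.2 * uc["effort"]

ranked = sorted(use_cases, key=priority, reverse=True)
for uc in ranked:
    print(f'{uc["name"]}: {priority(uc):.1f}')
```

A scheme like this turns a 200-item "prioritized" list into a defensible ordering, with the weights themselves serving as a record of what the team chose to value.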
The Rational Path to AI Investment
Investing in AI is essential, but these choices should be based on location, cost, and the final outcomes of your research. Organizations may make logical decisions about data center or hyperscaler hosting, resource allocation, and data ownership by using AI PCs for early development.
This goes beyond a theoretical framework. Northwestern Medicine’s success story shows that the strategy works: they have effectively used AI technology to improve patient care and expedite intricate operations, illustrating the practical advantages of using AI strategically.
Read more on Govindhtech.com
aionlinemoney · 8 months ago
Text
The Impact of Artificial intelligence in Healthcare Industry
Technology has always played an important role in healthcare, but the rise of Artificial Intelligence (AI) is bringing even bigger changes. From helping doctors diagnose diseases to improving patient care, AI is transforming the healthcare industry for the better. It’s making healthcare services more efficient, accurate, and personalized for each patient.
In this blog, we will take a closer look at how AI is used in healthcare, its benefits, and the challenges.
AI in Healthcare: A New Beginning 
AI in healthcare means using computers and smart programs to help doctors look at medical information and make better choices. AI can quickly go through a lot of data and find patterns that people might not see. This makes it really helpful for finding diseases.
Uses of AI in Healthcare:
Diagnostics and Early Detection:
AI is becoming a powerful tool in diagnosing diseases. Artificial intelligence in medical diagnosis can examine medical images like X-rays, MRIs, and CT scans with high accuracy. In some cases, AI can even spot diseases like cancer earlier than human doctors.
AI tools are also being developed to assess a person’s risk of diseases based on their genetics, lifestyle, and environment, making healthcare more personalized.
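A risk-assessment tool of the kind described above often reduces to a logistic model over patient factors. The sketch below is purely illustrative: the weights and bias are invented, not clinically derived, and a real system would learn them from large cohorts of patient data.

```python
import math

# Invented coefficients: NOT clinically derived.
WEIGHTS = {"age": 0.04, "smoker": 1.2, "family_history": 0.9}
BIAS = -4.0

def risk(patient: dict) -> float:
    """Logistic model: combine lifestyle/genetic factors into a 0-1 risk."""
    z = BIAS + sum(WEIGHTS[k] * patient.get(k, 0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

low  = {"age": 30, "smoker": 0, "family_history": 0}
high = {"age": 65, "smoker": 1, "family_history": 1}
print(f"low-risk patient:  {risk(low):.2f}")
print(f"high-risk patient: {risk(high):.2f}")
```

The point of the sketch is the personalization: two patients get different risk numbers from the same model because their individual factors differ.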
AI in Drug Discovery:
Finding new drugs is a long and expensive process. Artificial intelligence helps speed it up by predicting how different chemicals will interact with the body. This allows pharmaceutical companies to find new treatments faster.
During the COVID-19 pandemic, AI in healthcare was used to repurpose existing drugs to treat the virus. AI helped identify promising drugs quickly, shortening the usual timeline for research.
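The screening idea behind this can be illustrated with a toy example. Everything here is an assumption: the descriptor names and values are invented, and similarity to a known active compound stands in for the learned interaction models that real pipelines use.

```python
# Invented descriptor profile of a compound known to be active.
known_active = {"weight": 0.3, "polarity": 0.8, "ring_count": 0.5}

# Invented candidate compounds with the same descriptors.
candidates = {
    "compound_a": {"weight": 0.32, "polarity": 0.75, "ring_count": 0.5},
    "compound_b": {"weight": 0.90, "polarity": 0.10, "ring_count": 0.2},
}

def similarity(c: dict) -> float:
    # Negative Euclidean distance in descriptor space: closer = more promising.
    return -sum((c[k] - known_active[k]) ** 2 for k in known_active) ** 0.5

ranked = sorted(candidates, key=lambda name: similarity(candidates[name]),
                reverse=True)
print(ranked)
```

Ranking candidates by predicted similarity to known actives is also the intuition behind drug repurposing: existing, already-approved compounds are scored against a new target before any lab work begins.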
Virtual Health Assistants:
AI-powered virtual health assistants are now offering patients basic medical advice without the need to visit a hospital. These assistants can answer questions, remind patients to take medications, and help schedule appointments. They also reduce the workload on doctors.
Telemedicine, where doctors consult patients remotely, has become more popular, especially during the pandemic. AI-driven platforms allow doctors to diagnose and treat patients from a distance, making healthcare more accessible and convenient.
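The intent-routing core of such an assistant can be sketched with fixed rules. This is a deliberately minimal stand-in: production assistants use NLP models rather than keyword sets, and every rule and response string below is invented for illustration.

```python
# Invented keyword rules mapping user intents to canned responses.
RULES = [
    ({"remind", "medication", "pill"},
     "Reminder set: take your medication as scheduled."),
    ({"appointment", "schedule", "book"},
     "Opening the appointment scheduler..."),
    ({"symptom", "pain", "fever"},
     "Please describe your symptoms; severe cases need a doctor."),
]

def respond(message: str) -> str:
    words = set(message.lower().split())
    for keywords, reply in RULES:
        if words & keywords:  # any keyword present triggers the rule
            return reply
    return "I can help with reminders, appointments, and basic symptom questions."

print(respond("remind me to take my medication"))
print(respond("I need to book an appointment"))
```

Even this crude router shows why assistants reduce clinician workload: routine requests (reminders, scheduling) are resolved without a human in the loop, while anything unmatched falls through to a safe default.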
Robotics in Surgery:
AI in healthcare is helping doctors perform delicate surgeries. Surgical robots can carry out small, precise movements, which means patients heal faster.
One example is the Da Vinci Surgical System. It helps doctors perform complicated surgeries through tiny cuts, allowing patients to recover quicker and with better results.
Benefits of AI in Healthcare:
The uses of Artificial intelligence offer many benefits:
Increased Accuracy
Artificial intelligence has increased accuracy in the healthcare industry. AI can analyze large amounts of data quickly and accurately, leading to better and earlier diagnoses. This improves treatment outcomes and can save lives.
Personalized Treatments
AI allows for personalized medicine by analyzing a patient’s unique medical history, genetics, and lifestyle. This leads to more effective treatments tailored to individual needs.
Lower Costs
AI can help reduce healthcare costs by speeding up processes, reducing errors, and improving efficiency. Faster drug discovery and better patient management also save money.
Improved Patient Experience
Virtual health assistants and telemedicine make healthcare more convenient for patients. They allow people to access medical advice and consultations from home, which is especially helpful for those in remote areas or with mobility issues.
Challenges:
Although AI is very promising in healthcare, there are some challenges:
Data Privacy and Security 
AI needs a lot of patient data to work, which raises concerns about keeping that data safe and private. It’s important to protect sensitive patient information as AI becomes more common in healthcare. This is the main challenge for machine learning in the healthcare industry.
Lack of Human Interaction 
While AI can help doctors, it cannot replace the personal care and understanding that human doctors provide. Some patients might feel that AI-driven care is too impersonal, so it’s important to keep a balance between AI’s speed and the human touch in healthcare.
Regulatory Challenges
As AI develops quickly, governments and regulators must make sure it is safe and effective in healthcare. Creating clear rules for AI in healthcare is a complicated process that will take time.
Conclusion 
AI is making big changes in healthcare. It helps doctors find diseases early, deliver personalized treatments, and improve surgeries. AI is changing every aspect of daily life, so to stay prepared and up to date you should follow AI-related news and blogs.
In the future, AI will likely become an even bigger part of healthcare, making care better and easier to get. AI isn’t here to replace doctors but to work with them, making healthcare smarter, faster, and better for patients everywhere.