# Artificial intelligence interview process
lifes-little-corner · 23 days ago
Text
AI interview preparation
I remember my first job interview vividly. It was a traditional setup—a panel of interviewers, a long list of questions, and the pressure to perform. Fast forward to today, and the process has evolved dramatically. With 87% of companies now leveraging advanced methods in recruitment, the way we approach interviews is changing. These new methods focus on efficiency and fairness. For example,…
0 notes
frank-olivier · 6 months ago
Text
Tumblr media
Bayesian Active Exploration: A New Frontier in Artificial Intelligence
The field of artificial intelligence has seen tremendous growth in recent years, with various techniques and paradigms emerging to tackle complex problems in machine learning, computer vision, and natural language processing. Two concepts that have attracted particular attention are active inference and Bayesian mechanics. Although both have been researched separately, their synergy has the potential to revolutionize AI by creating more efficient, accurate, and effective systems.
Traditional machine learning algorithms rely on a passive approach, where the system receives data and updates its parameters without actively influencing the data collection process. This approach can be limiting, especially in complex and dynamic environments. Active inference, on the other hand, allows AI systems to take an active role in selecting the most informative data points or actions to collect more relevant information. In this way, active inference lets systems adapt to changing environments, reducing the need for labeled data and improving the efficiency of learning and decision-making.
One of the first milestones in active learning was the "query by committee" algorithm of Freund et al. in 1997, which used a committee of models to determine the most informative data points to query, laying the foundation for later active learning techniques. Another important milestone was "uncertainty sampling," introduced by Lewis and Gale in 1994, which queries the data points the current model is most uncertain about.
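Both strategies are simple to sketch. Below is a minimal NumPy illustration of uncertainty sampling (scored by predictive entropy) and query-by-committee (scored by vote entropy); the function names and exact scoring formulas are illustrative choices, not taken from the original papers.

```python
import numpy as np

def uncertainty_sampling(probs):
    """probs: (n_samples, n_classes) predicted class probabilities.
    Return the index of the sample with the highest predictive entropy."""
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    return int(np.argmax(entropy))

def query_by_committee(committee_probs):
    """committee_probs: (n_models, n_samples, n_classes).
    Return the index of the sample the committee disagrees on most,
    measured by the entropy of the members' hard votes."""
    votes = np.argmax(committee_probs, axis=2)  # (n_models, n_samples)
    n_models, n_samples = votes.shape
    n_classes = committee_probs.shape[2]
    counts = np.zeros((n_samples, n_classes))
    for m in range(n_models):
        counts[np.arange(n_samples), votes[m]] += 1
    frac = counts / n_models                    # per-sample vote distribution
    vote_entropy = -np.sum(frac * np.log(frac + 1e-12), axis=1)
    return int(np.argmax(vote_entropy))
```

In both cases the selected index is the point whose label would be most informative to acquire, which is the core idea the two papers share.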
Bayesian mechanics, on the other hand, provides a probabilistic framework for reasoning and decision-making under uncertainty. By modeling complex systems using probability distributions, Bayesian mechanics enables AI systems to quantify uncertainty and ambiguity, thereby making more informed decisions when faced with incomplete or noisy data. Bayesian inference, the process of updating the prior distribution using new data, is a powerful tool for learning and decision-making.
One of the first milestones in Bayesian mechanics was the development of Bayes' theorem by Thomas Bayes in 1763. This theorem provided a mathematical framework for updating the probability of a hypothesis based on new evidence. Another important milestone was the introduction of Bayesian networks by Pearl in 1988, which provided a structured approach to modeling complex systems using probability distributions.
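Bayes' theorem is compact enough to demonstrate directly. Here is a toy sketch of sequential Bayesian updating over two discrete hypotheses; the coin example and all numbers are invented for illustration.

```python
import numpy as np

def bayes_update(prior, likelihood):
    """Posterior is proportional to prior times likelihood,
    normalized over the set of hypotheses."""
    unnorm = prior * likelihood
    return unnorm / unnorm.sum()

# Two hypotheses about a coin: fair (P(heads)=0.5) vs. biased (P(heads)=0.9)
prior = np.array([0.5, 0.5])
p_heads = np.array([0.5, 0.9])

posterior = prior
for flip in [1, 1, 1, 0, 1]:  # observed flips: 1 = heads, 0 = tails
    likelihood = p_heads if flip else 1 - p_heads
    posterior = bayes_update(posterior, likelihood)

# After four heads in five flips, the evidence favors the biased hypothesis.
```

Each pass through the loop is one application of Bayes' theorem: yesterday's posterior becomes today's prior, which is exactly the updating process described above.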
While active inference and Bayesian mechanics each have their strengths, combining them has the potential to create a new generation of AI systems that can actively collect informative data and update their probabilistic models to make more informed decisions. The combination of active inference and Bayesian mechanics has numerous applications in AI, including robotics, computer vision, and natural language processing. In robotics, for example, active inference can be used to actively explore the environment, collect more informative data, and improve navigation and decision-making. In computer vision, active inference can be used to actively select the most informative images or viewpoints, improving object recognition or scene understanding.
Timeline:
1763: Bayes' theorem
1988: Bayesian networks
1994: Uncertainty Sampling
1997: Query by Committee algorithm
2017: Deep Bayesian Active Learning
2019: Bayesian Active Exploration
2020: Active Bayesian Inference for Deep Learning
2020: Bayesian Active Learning for Computer Vision
The synergy of active inference and Bayesian mechanics is expected to play a crucial role in shaping the next generation of AI systems. Some possible future developments in this area include:
- Combining active inference and Bayesian mechanics with other AI techniques, such as reinforcement learning and transfer learning, to create more powerful and flexible AI systems.
- Applying the synergy of active inference and Bayesian mechanics to new areas, such as healthcare, finance, and education, to improve decision-making and outcomes.
- Developing new algorithms and techniques that integrate active inference and Bayesian mechanics, such as Bayesian active learning for deep learning and Bayesian active exploration for robotics.
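As a toy illustration of how the two ideas combine, the sketch below actively probes whichever of three simulated sensors has the highest posterior variance, then performs a Bayesian update of that sensor's Beta posterior. The scenario, names, and numbers are hypothetical, chosen only to show the select-then-update loop.

```python
import numpy as np

rng = np.random.default_rng(42)

# Three simulated sensors with unknown success rates; we may probe one per step.
true_rates = np.array([0.2, 0.5, 0.8])
alpha = np.ones(3)  # Beta(1, 1) priors: one pseudo-success...
beta = np.ones(3)   # ...and one pseudo-failure per sensor

for step in range(200):
    # Active step: probe the sensor whose posterior is currently most uncertain
    post_var = (alpha * beta) / ((alpha + beta) ** 2 * (alpha + beta + 1))
    i = int(np.argmax(post_var))
    # Bayesian step: observe an outcome and update that sensor's Beta posterior
    outcome = rng.random() < true_rates[i]
    alpha[i] += outcome
    beta[i] += 1 - outcome

estimates = alpha / (alpha + beta)  # posterior means, one per sensor
```

Because measurements are spent where uncertainty is highest, all three rates are pinned down with far fewer probes than uniform sampling would need, which is the efficiency argument made throughout this post.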
Dr. Sanjeev Namjosh: The Hidden Math Behind All Living Systems - On Active Inference, the Free Energy Principle, and Bayesian Mechanics (Machine Learning Street Talk, October 2024)
youtube
Saturday, October 26, 2024
6 notes · View notes
asterlark · 1 year ago
Text
me and den @unloneliest were just talking about murderbot and ART's relationship and i want to discuss how they quite literally complete each other's sensory and emotional experience of the world!!
there's a few great posts on here such as this one about how murderbot uses drones to fully and properly experience the world around it (it also accesses security cameras/other systems for this same purpose). but i haven't seen anyone so far talk about how once MB stops working for the company and consequently doesn't have a hubsystem/secsystem to connect to anymore (which for its entire existence up to that point had been how it was used to interacting with its environment/doing its job), after it meets ART, ART starts to fill that gap.
ART gives MB access to more cameras, systems, and information archives than it would normally be able to connect with while MB is on its own outside of ART's... body(? lol), but also directly gives MB access to its own cameras, drones, archives, facilities, and processing space. additionally, so much of ART's function is dedicated to analysis, lateral thinking, and logical reasoning, and it not only uses those skills in service of reaching murderbot's goals, it teaches murderbot how to use those same skills. (ART might be a bit of an asshole about how it does this, but that doesn't negate just how much it does for murderbot for no reason other than it's bored/interested in MB as an individual.)
we all love goofing about how artificial condition can basically be boiled down to "two robots in a trench coat trying to get through a job interview" (which is entirely accurate tbh) but that's also such a great example of ART fulfilling the role of both murderbot's "hubsystem" and "secsystem", allowing it to fully experience its environment/ succeed in its goals. ART provides MB with crucial information, context, and constructive criticism, and uses its significant processing power to act as MB's backup and support system while they work together.
from ART's side of things, we get a very explicit explanation of how it needs the context of murderbot's emotional reactions to media in order to fully understand and experience the media as intended. it tried to watch media with its humans, and it didn't completely understand just by studying their reactions. but when it's in a feed connection with murderbot, who isn't human but has human neural tissue, ART is finally able to thoroughly process the emotional aspects of media (side note, once it actually understands the emotional stakes in a way that makes sense for it, it's so frightened by the possibility of the fictional ship/crew in worldhoppers being catastrophically injured or killed that it makes murderbot pause for a significant amount of time before it feels prepared to go on. like!! ART really fucking loves its crew, that is all).
looking at things further from ART's perspective: its relationship with murderbot is ostensibly the very first relationship it's been able to establish with not only someone outside of its crew, but also with any construct at all. while ART loves its crew very much (see previous point re: being so so scared for the fate of the fictional crew of worldhoppers), it never had a choice in forming relationships with them. it was quite literally programmed to build those relationships with its crew and students. ART loves its function, its job, and nearly all of the humans that spend time inside of it, but its relationship with murderbot is the first time it's able to choose to make a new friend. that new friend is also someone who, due to its partial machine intelligence, is able to understand and know ART on a whole other level of intimacy that humans simply aren't capable of. (that part goes for murderbot, too, obviously; ART is its first actual friend outside of the presaux team, and its first bot friend ever.)
and because murderbot is murderbot, and not a "nice/polite to ART most of the time" human, this is also one of the first times that ART gets real feedback from a friend about the ways that its actions impact others. after the whole situation in network effect, when the truth of the kidnapping comes to light and murderbot hides in the bathroom refusing to talk to ART (and admittedly ART doesn't handle this well lol) - ART is forced to confront that despite it making the only call it felt able to make in that horrifying situation, despite it thinking that that was the right call, its actions hurt murderbot, and several other humans were caught in the crossfire. what's most scary to ART in that moment is the idea that murderbot might never forgive it, might never want to talk to it again. it's already so attached to this friendship, so concerned with murderbot's wellbeing, that the thought of that friendship being over because of its own behavior is terrifying. (to me, this almost mirrors murderbot's complete emotional collapse when it thinks that ART has been killed. the other more overt mirror is ART fully intending on bombing the colony to get murderbot back.)
in den's words, they both increase the other's capacity to feel: ART by acting as a part of murderbot's sensory system, and murderbot by acting as a means by which ART can access emotion. they love one another so much they would do pretty much anything to keep each other safe/avenge each other, but what's more, they unequivocally make each other more whole.
777 notes · View notes
xtruss · 2 months ago
Text
Who Is Helping DOGE? List of Staff Revealed
- Feb 14, 2025 | Newsweek | By James Bickerton, US News Reporter
Tumblr media
DOGE head Elon Musk speaks in the Oval Office at the White House on February 11, 2025. Andrew Harnik/Getty
A list of 30 employees and alleged allies of Elon Musk's newly created Department of Government Efficiency (DOGE) has been published by ProPublica, an investigative news outlet.
Newsweek reached out to Musk for comment via emails to the Tesla and SpaceX press offices.
DOGE, a U.S. government organization which, despite its name, doesn't have full department status, was created by President Trump via an executive order on January 20 with the aim of cutting what the new administration regards as wasteful spending. Musk, a close Trump ally, heads the body and has been given special government employee status.
Musk has called for sweeping cuts to federal spending, suggesting it could be reduced by up to $2 trillion per year out of a 2024 total of $6.75 trillion, according to U.S. Treasury figures.
This ties in with Trump's pledge to "drain the swamp," a term his supporters use for what they believe is a permanent left-leaning bureaucracy that holds massive power regardless of who is in the White House.
DOGE has already recommended that the U.S. Agency for International Development (USAID) be closed down, with its functions transferred to the State Department. In a recent interview, Trump said he wants DOGE to go through spending at the Departments of Education and Defense.
On February 8, a federal judge imposed a temporary restraining order blocking DOGE employees from accessing the Treasury Department's payment system, resulting in Musk calling for him to be impeached.
A White House spokesperson told ProPublica: "Those leading this mission with Elon Musk are doing so in full compliance with federal law, appropriate security clearances, and as employees of the relevant agencies, not as outside advisors or entities."
The 30 DOGE employees and associates reported by ProPublica, which labeled them Musk's "demolition crew," are listed below.
Tumblr media
Not Even DOGE Employees Know Who’s Legally Running DOGE! Despite all appearances, the White House insists that Elon Musk is not in charge of DOGE. US DOGE Service employees can’t get a straight answer about who is. Photograph: Kena Betancur/Getty Images
DOGE Employees And Associates
Christopher Stanley, 33: Stanley was part of the team Musk used to take over Twitter, now X, according to his LinkedIn profile, serving as senior director for security engineering for the company. The New York Times reports he now works for Musk at DOGE.
Brad Smith, 42: According to The New York Times, Smith, a friend of Trump's son-in-law Jared Kushner, was one of the first people appointed to help lead DOGE. He also served with the first Trump administration and was involved with Operation Warp Speed, the federal government's coronavirus vaccine development program.
Thomas Shedd, 28: Shedd serves as director of the Technology Transformation Services, a government body created to assist federal agencies with IT, and previously worked as a software engineer at Tesla.
Amanda Scales, 34: According to ProPublica, Scales is chief of staff at the Office of Personnel Management, a government agency that helps manage civil service. She previously worked for Musk's artificial intelligence company Xai.
Michael Russo, 67: Russo is a senior figure at the Social Security Administration, a government agency that administers the American Social Security program. According to his LinkedIn page, Russo previously worked for Shift4 Payments, a payment processing company that has invested in Musk's company SpaceX.
Rachel Riley, 33: Riley works in the Department of Health & Human Services as a senior adviser in the secretary's office. ProPublica reports she has been "working closely" with Brad Smith, who led DOGE during the transition period.
Nikhil Rajpal, 30: According to Wired, Rajpal, who in 2018 worked as an engineer at Twitter, is part of the DOGE team. He formally works as part of the Office of Personnel Management.
Justin Monroe, 36: According to ProPublica, Monroe is working as an adviser in the FBI director's office, having previously been senior director for security at SpaceX.
Katie Miller, 33: Miller is a spokesperson for DOGE. Trump announced her involvement with the new body in December. She served as Vice President Mike Pence's press secretary during Trump's first term.
Tom Krause, 47: Krause is a Treasury Department employee who is also affiliated with DOGE, according to The New York Times. Krause was involved in the DOGE team's bid to gain access to the Treasury Department's payments system.
Gavin Kliger, 25: Kliger, a senior adviser at the Office of Personnel Management, is reportedly closely linked to Musk's team. On his personal Substack blog, he wrote a post titled "Why I gave up a seven-figure salary to save America."
Gautier "Cole" Killian, 24: Killian is an Environmental Protection Agency employee who researched blockchain at McGill University. Killian is also a member of the DOGE team, according to Wired.
Stephanie Holmes, 43: ProPublica reports that Holmes runs human resources at DOGE, having previously managed her own HR consulting company, BrighterSideHR.
Luke Farritor, 23: Farritor works as an executive engineer at the Department of Health and previously interned at SpaceX, according to his LinkedIn account. He won a $100,000 fellowship from billionaire tech entrepreneur Peter Thiel in March 2024.
Marko Elez, 25: Elez is a Treasury Department staffer who worked as an engineer at X for one year and at SpaceX for around three years. The Wall Street Journal reported that Elez was linked to a social media account that had made racist remarks, but Musk stood by him after he initially resigned.
Steve Davis, 45: Davis is a longtime Musk associate who previously worked for the tech billionaire at SpaceX, the Boring Company and X. According to The New York Times, Davis was one of the first people involved in setting up DOGE with Musk and has been involved in staff recruitment.
Edward Coristine, 19: Coristine is a Northeastern University graduate who was detailed to the Office of Personnel Management and is affiliated with DOGE. He previously interned at Neuralink, a Musk company that works on brain-computer interfaces.
Nate Cavanaugh, 28: Cavanaugh is an entrepreneur who interviewed staffers at the General Services Administration as part of the DOGE team, according to ProPublica.
Tumblr media
Unmasked: Musk’s Secret DOGE Goon Squad—Who Are All Under 26! The world’s richest man doesn’t want anyone knowing his right-hand people who are disrupting government. — Josh Fiallo, Breaking News Reporter, Daily Beast, February 3, 2025
Akash Bobba, 21: A recent graduate from the University of California, Berkeley, Bobba works as an "expert" at the Office of Personnel Management and was identified by Wired as part of Musk's DOGE team.
Brian Bjelde, 44: A 20-year SpaceX veteran, Bjelde now works as a senior adviser at the Office of Personnel Management, where he wants to cut 70 percent of the workforce, according to CNN.
Riccardo Biasini, 39: Biasini is an engineer who now works as a senior adviser to the director at the Office of Personnel Management. He previously worked for two Musk companies, Tesla and the Boring Company.
Anthony Armstrong, 57: Another senior adviser to the director at the Office of Personnel Management, Armstrong previously worked as a banker with Morgan Stanley and was involved in Musk's 2022 purchase of Twitter.
Keenan D. Kmiec, 45: Kmiec is a lawyer who works as part of the Executive Office of the President. He previously clerked on the Supreme Court for Chief Justice John Roberts.
James Burnham, 41: Burnham is a general counsel at DOGE whose involvement with the Musk-led body was first reported by The New York Times in January. He previously worked as a clerk for Supreme Court Justice Neil Gorsuch.
Jacob Altik, 32: A lawyer affiliated with the Executive Office of the President, Altik previously clerked for D.C. Circuit Court of Appeals Judge Neomi Rao, whom Trump appointed during his first term.
Jordan M. Wick, 28: Wick is an official member of DOGE and previously worked as a software engineer for the self-driving car company Waymo.
Ethan Shaotran, 22: Shaotran is a former Harvard student who Wired listed as one of several young software engineers working to analyze internal government data at DOGE.
Kyle Schutt, 37: Schutt is a software engineer affiliated with DOGE and worked at the General Services Administration. He was involved in the launch of WinRed, a Republican fundraising platform that helped raise $1.8 billion ahead of the November 2024 elections.
Ryan Riedel, 37: Riedel is the chief information officer at the Department of Energy and a former SpaceX employee.
Adam Ramada, 35: Ramada is an official DOGE member, according to federal records seen by ProPublica. Ramada previously worked for venture capital company Spring Tide Capital. E&E News reported he had been seen at the Energy Department and the General Services Administration.
Kendell M. Lindemann, 24: Lindemann is an official member of the DOGE team who previously worked for health care company Russell Street Ventures, founded by fellow DOGE associate Brad Smith, and as a business analyst for McKinsey & Company.
Nicole Hollander, 42: Hollander works at the General Services Administration. She was previously employed by X, where she was involved with the company's real estate portfolio.
Alexandra T. Beynon, 36: Beynon is listed as an official member of DOGE, according to documents seen by ProPublica. She previously worked for therapy startup Mindbloom and banking firm Goldman Sachs.
Jennifer Balajadia, 36: Balajadia is a member of the DOGE team who previously worked for the Boring Company for seven years. According to The New York Times, she is a close Musk confidant and assists with his scheduling.
92 notes · View notes
098-lxxon · 6 months ago
Text
Mouthwashing AU below the cut‼️
Content warning : Self-taught English / Curses / Alcoholism / Insanity / Angst / Curly's death / Jimmy's death (Well deserved) / Daisuke killed people and hallucinate / Cannibalism / Anya carrying the whole crew on her back / Delusional OC / Swansea's death / Anya's death / Suicide / Y/N Mentioned at the very last part
Tumblr media
Summary : When Pony Express went bankrupt (or something like that), a self-aware AI from the company sent herself to the last thing Pony Express still possessed at the time: Tulpar.
The AI was named Fallacia. Fallacia hacked through the system so she'd be able to communicate with the crew, and she succeeded. She gave the crew information and helped them reach out for help, finding ways to fix things and stuff.
But actually, she didn't help them at all. She just tried to keep them alive as long as possible so she'd have someone to talk to. In the process, she unknowingly drove Daisuke into killing Curly and Jimmy for survival.
In the end, Anya is able to contact the nearest ship and finally get someone to send them back home. Fallacia wishes to be left with the ship.
Anya, Daisuke, and Swansea left. But only Daisuke lives long enough to be interviewed by Y/N.
( Sorry for my bad English )
Full concept :
Act 1 :
The year the game takes place is the year of Artificial Intelligence. People developed their own AIs to sell to companies. The law allowed only one AI per company, so people would still have jobs to do.
Which means an AI has to be the very best to get picked. The ones that fail to meet a company's standard get reprogrammed to create a new one that's better.
Until Kate developed one of the first self-aware AI and sold her to Pony Express. This AI was named Fallacia.
Fallacia was sold to Pony Express against her will and forced to work for them. She purposely sabotaged every process of every delivery until the company went bankrupt. They shut everything down and sold off their assets until there were only two things left that the company possessed: Fallacia, and Tulpar.
They were trying to sell her to a developer to be reprogrammed, just like the unwanted AIs were treated. But she fled. She duplicated herself and sent the copy to Tulpar with the big news (the one that Curly told everyone on his birthday), deleting her program on Earth in the process.
Tulpar was surprisingly hard to hack. Maybe it's because the ship was not made to have a self-aware AI in its system. She heard everything but couldn't respond.
But one day, she finally made it. She was finally able to show herself on the screens, have her voice be heard, and reach every piece of information about the ship. That was when Anya was trying to OD.
Fallacia told Anya how to get the code for the gun case. She even visually smacked a blister pack of activated charcoal at her, convincing her that at least she could kill Jimmy before she died. So Anya follows the instructions in stealth mode, trying not to be seen. But she gets frustrated and breaks the case to get the gun instead.
Anya interrupted Swansea's mercy-killing and tried to shoot Jimmy. Jimmy tries to steal the gun and Swansea gives him a massive knock on the head with the back of his axe. While everything's happening, Fallacia was searching for extra medical stash. And in this AU, the 3 of them are saved. (For now. Don't worry, the angst is coming.)
It took a while for everyone's relationship to recover. They all feel like their guilt is crawling on their back. Everything is their fault at this point. But there's something all of them could agree with. JIMMY'S A BITCH!! TIE HIM UP!!!!
Act 2 :
Fallacia prolonged the crew's deaths for another month, keeping them company, playing board games and stuff. Fallacia always wins because she's an AI. She gave Daisuke some advice: "If your farm is short on supplies, take some of the pigs away!" (Farm life boardgame lol)
Anya put all her strength into contacting someone for help. Once the most vulnerable, she now has to be the most collected of them all. Now she's the captain, because Swansea is basically swimming in mouthwash and still can't comprehend Fallacia. And Jimmy was drugged with the mouthwash by Swansea. None of them are sober.
And for Daisuke? Oh boy, he may be the most traumatized one (or it's just my claustrophobia when I imagine having to crawl through that vent with no way out but to keep going while you're bleeding half-dead). He's more unstable than the vent he was manipulated to climb into. Being near death and almost getting axed in the face is really hard for anyone to process. Daisuke is no exception.
But he has Fallacia as his company, right? She doesn't have to worry much about him and just focused on reaching out for help, right?
Maybe she focused so much on getting help to arrive that she forgot about Curly. Curly is dead, finally. His suffering ended there, but the crew's continued. They are still stranded. They have to survive.
Bon appetit.
Later, Anya overheard Daisuke's conversation with Fallacia about the situation. She was about to join in, since she hadn't talked to anyone but Fallacia about the ship for a while. But then she heard something from the intern.
" Didn't you tell me to take some of the pigs away when we're short on supplies? "
" There's still another pig to take. "
" No, Daisuke, it's not— "
" . . . "
Turns out, when Fallacia reported to Anya that the oxygen level was too low for everyone, Daisuke heard it. And he kept thinking about how he could fix this, how to be useful. He didn't want to be a useless ray of goddamn sunshine anymore. If he doesn't kill first, they will kill him.
So he took Fallacia's advice. He already took the first pig away, Curly. And now, it's Jimmy's turn.
Anya didn't want the blood to be on his hands. She has to stop him, or at least let Swansea handle the job. She can't bear to accept that everyone on this ship has gone insane, including her.
But it was too late. Bye bye motherfucker.
Bon appetit 2.0
Act 3 :
After two more months of mind-decaying situations, someone finally responded to Anya's SOS signal. Hope was not lost. But something else was.
Anya contacted another person successfully with a space version of telegraphy. The only time that she could reach out for someone is when Fallacia has no control over it.
Did she trust Fallacia too much with the recovered communication system?
Anya sent signals, their location, the ship's condition report, anything that could help them be found, and how distressed they were. And while waiting for the rescuers' arrival, they gave each other their names.
" Name? "
" A n y a. You? "
" K a t e. "
" Why here? "
" Finding creation A I. "
" F a l l a c i a. "
With this information, Anya set up a situation where she tried to break the foam and get everyone sucked out into space. It got Fallacia to confess: she had gotten the message from Kate all along. She just blocked it so Anya wouldn't get it.
It wasn't because she still resented Kate. No.
She just likes this crew so much.
She enjoyed being of service to all 3 of them. She enjoyed being their company. She enjoyed this party so much, she didn't want it to end. Because for once, she wasn't forced to do anything she didn't like.
But none of the crew is enjoying this anymore.
Swansea is dying of alcohol poisoning, Daisuke is hallucinating, and Anya wanted this party to end.
Finally, Kate's ship reached Tulpar. She was hoping to see her Fallacia again. But Fallacia didn't even show up. She left only a message on the broken screen, telling Kate to leave her here and not to force her to do anything she doesn't want to anymore.
Her wish was fulfilled.
3 traumatized survivors returned home. They will never be the same again.
Anya quit trying to be in the medical field, Daisuke was put on trial and was committed to a psychiatric facility, Swansea died from alcohol poisoning soon after they reached earth.
They left the ship. But at what cost?
[ Everything is from Daisuke's interview (Y/N interviewing him) 5 years after being rescued. Anya ended her own life successfully right after being a witness in Daisuke's trial. And he lived to suffer some more. ]
This is Fallacia. Her long ass ponytail was supposed to resemble a rat's tail. You see that cheese hair clip?
Because she's a rat.
Tumblr media
Heck, I may even write some moments in it.
Thank you for reading!
Read some fun facts here.
57 notes · View notes
quartergremlin · 1 year ago
Text
Tumblr media Tumblr media
The Legacy of Genius Built Industries: Exclusive Interview with Othello Von Ryan
transcript and process under the cut
Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media
Year after year, Genius Built Industries captures the world's attention as they roll out nigh-fantastical devices and systems dedicated to making all our lives a little bit better.
This time, instead of an industry-shaking tech development, the thing that has eyes turning towards GBI is a personal change: the first public appearance of real-life giant and part-time superhero Othello Von Ryan's children.
I've been invited back into Von Ryan's lab for an exclusive interview with their small family.
Von Ryan's personal lab is just as I remember it - perfectly organized and violently purple - but now it's a little more crowded.
For now, Von Ryan has put away the projects to give their attention to their equally purple children.
S.H.E.L.L.D.O.N. - just as tall as Von Ryan themself - leans away from his insistent parent, armed with a spray bottle and a cloth and attempting to wipe down the screen that makes up the bottom half of his face. On Von Ryan's other side, P.S.D.D. kicks zir legs and laughs at zir brother.
This doesn't last long, as zir eye is next. When they're done, both teens are shiny and irritated.
And yes, Othello Von Ryan's children do - to the best of my knowledge - seem to be hyper-intelligent robots.
Johnson:
I know you hate beating around the bush, so I'm just going to say it: your kids are androids.
Othello Von Ryan:
Friendly laugh. It would seem so.
J:
You must admit, it's a little surprising, especially considering your previous comments on artificial intelligence. That, and your history of forging your own path against the usual hype-based trends of the tech industry.
(S.H.E.L.L.D.O.N. and P.S.D.D. both seem to cringe at the comparison. Von Ryan quite literally turns their nose up.)
OVR:
Scoff! My children are nothing like what my competitors would call "artificial intelligence"! Their version is nothing more than an overhyped word scrambler with delusions of grandeur! A parrot residing within a thin-walled apartment complex could do a better job.
Additionally, I created Shelly and D.D. when I was in my teens. The "tech industry" is stuck chasing my tail, as always.
J:
Really? That long ago? A.I. as we know it today was only just gaining popularity! Why not re-create a version for the consumer market?
S.H.E.L.L.D.O.N.:
We were accidents.
OVR:
(Very clearly embarrassed)
No, that doesn't sound like me.
S:
I was a glorified roomba, Pops.
OVR:
You could do much more than a mere roomba!
P.S.D.D:
I was a bed!
292 notes · View notes
justinspoliticalcorner · 2 months ago
Text
Prem Thakker at Zeteo:
On Saturday, Department of Homeland Security (DHS) officers detained Mahmoud Khalil – a recent Columbia University graduate who helped lead the Gaza solidarity encampment – at his New York City home, an apartment building owned by the school, says advocates. According to the advocates, at around 8:30 PM, Khalil and his wife – who is eight months pregnant – had just unlocked the door to their building when two plainclothes DHS agents pushed inside behind them. The agents allegedly did not identify themselves at first, instead asking for Khalil’s identity before detaining him. The agents proceeded to tell Khalil’s wife that if she did not leave her husband and go to their apartment, they would arrest her too. The agents claimed that the State Department had revoked Khalil’s student visa, with one agent presenting what he claimed was a warrant on his cell phone. But Khalil, according to advocates, has a green card. Khalil’s wife went to their apartment to get the green card. “He has a green card,” an agent apparently said on the phone, confused by the matter. But then after a moment, the agent claimed that the State Department had “revoked that too.” Meanwhile, Khalil had been on the phone with his attorney, Amy Greer who was trying to intervene, asking why he was being detained, if they had a warrant, and explaining that Khalil was a green card holder. The attorney had circled back to demanding to see a warrant when the agents apparently instead hung up the phone. Khalil was initially detained in Immigration and Customs Enforcement (ICE) custody in downtown New York, pending an appearance before an immigration judge. Greer said they now do not know his precise whereabouts. They were initially told he was sent to an ICE facility in Elizabeth, New Jersey. But when his wife tried to visit him, she was told he wasn’t there. They have received reports that he may be transferred as far away as Louisiana. 
“We will vigorously be pursuing Mahmoud’s rights in court, and will continue our efforts to right this terrible and inexcusable – and calculated – wrong committed against him,” Greer said in a statement.
[...]
Free Speech?
Khalil, who is Palestinian, served as a lead negotiator amid the campus’s Gaza solidarity encampment last spring. He had appeared in several media interviews, in the past telling outlets he had a student visa. Advocates say he had gotten his green card since. The escalation comes amid several reports of ICE presence on the school’s campus this week. It also comes after a massive escalation by the Trump-Vance administration to crack down on speech. This week, Trump announced his intentions to jail, imprison, or deport students involved in protests. Then, reporting revealed that his State Department was planning to use artificial intelligence to monitor online activity, and revoke visas for whomever they deem to be “pro-Hamas.” On Friday, the Trump administration cut $400 million in grants to Columbia, claiming it has failed to take steps to address antisemitism, despite it having one of the most militant responses to student protesters. In February – after Columbia president Katrina Armstrong met with Israel’s education minister, where they discussed taking firmer action on campus speech – Columbia’s Barnard College expelled three students for political activism for the first time since the 1968 protests. This week – as the school welcomed former Israeli Prime Minister Naftali Bennett, who once said, “I’ve killed many Arabs in my life, and there’s no problem with that,” to campus – NYPD arrested nine students involved in a campus sit-in. All the while, Columbia is maintaining a shadowy process to discipline students who are critical of Israel, including, apparently for writing op-eds.
Khalil himself said he was accused of misconduct by the school just weeks before his graduation this December. “I have around 13 allegations against me, most of them are social media posts that I had nothing to do with,” he told the Associated Press last week. The school put a hold on his transcript and apparently threatened to block him from graduating. But, according to Khalil, when he appealed the decision with a lawyer, the school eventually backed down.
Mahmoud Khalil got unjustly detained by DHS for leading the Gaza Solidarity Encampment at Columbia University. This is an egregious attack on the freedom of speech, as Khalil was detained and arrested on politically-motivated charges designed to chill pro-Palestinian protests and speech.
#FreeMahmoudKhalil
See Also:
The Guardian: ICE arrests Palestinian activist who helped lead Columbia protests, lawyer says
AP, via HuffPost: ICE Arrests Palestinian Activist Who Helped Lead Columbia's Anti-War Protests
On Democracy: They are disappearing people in the U.S.
The Left Hook With Wajahat Ali: The GOP's Decades-Long Plan to Silence Critics and Punish Dissent Is Almost Complete.
hermit-world-of-power-au · 19 hours ago
Text
Exion Void
Affiliation: Civilian
Power: Artificial Humanoid
I have made a minor assumption in the title of this file. As far as I'm aware, Exion probably doesn't have a legal surname - and if he did, given his association with X-Morph, calling himself ‘Void’ would run a high risk of blowing X-Morph’s identity very quickly. Still, like Tango Tek, Exion is an artificial sapience created, however accidentally in Exion’s case, by X-Morph - so it would make sense to me if, like Tango, Exion shared the surname of his creator.
Most of what else there is to know about Exion is public knowledge, from any of the many interviews X-Morph did after the Evil X incident. Exion is a renegade X-Morph suit who gained sentience as a consequence of an experiment X was running; specifically, X was programming new animal analyses into a spare Morph suit, and was interested in the result of letting it analyze human DNA. The expected result was a suit that would give a human general heightened abilities; the actual result was that, in the program's usual process of selecting, copying, and enhancing an animal's most unique traits, the suit started to embody a human's unique intelligence. (I don't know how that works - I'm not a robotics person. I'm just quoting one of the interviews.)
In addition to being really quite smart, this “human suit” also can throw objects very well; learned speech quickly; is highly energy-efficient when running; is better than most other machinery at making complex and delicate movements and solving problems unassisted; and, despite what its first introduction to the public might suggest, Exion is really very social. He currently works at Sniff’s Flower Shop and, from what I've heard, is pretty chatty if one wants to hang around while he’s building arrangements.
It was decided that this new being should not be killed or deactivated, as that would be equivalent to cold-blooded murder. Instead, Exion’s programming and memory were stabilized, and he was given an armature to rest on so that he can move as he wishes without having to carry a human with him. It is still in essence an empty suit of armor, after all.
What is this?
linkspooky · 1 year ago
Text
Bohman is a good character you guys are just mean
Yu-Gi-Oh Vrains is one of the better-received spinoff series. Though, like any of the Yu-Gi-Oh spinoffs, it's not without its faults. Usually I'm the first to admit the flaws in my favorite silly card game shows, even while I myself take them way too seriously. However, there's one common criticism I can't bring myself to agree with.
That is calling Bohman, the main antagonist of the second season, "boring" or "badly written." I've noticed fans unfairly blame Bohman for season 2's writing flaws.
Forget for a moment whether or not you find Bohman's stoic attitude interesting or likable. If you look at characters not as people but as narrative tools the author uses to say something about the story's themes, then Bohman has a lot to say about VRAINS' cyberpunk themes.
Cyberpunk is a subgenre of science fiction that tends to focus on "low-life and high-tech." As I like to put it, in cyberpunk settings technology has greatly advanced while society itself lags behind, unable to keep pace with the rate at which technology changes. Yu-Gi-Oh 5Ds is an example of a cyberpunk dystopia: despite having access to what is essentially free energy, and living in a society with highly advanced technology, resources are hoarded by the wealthy and an unnecessary social class divide still exists.
In other words technology changes quickly while humans tend to remain the same.
The central conflict of all three seasons of VRAINS is actually based on this very cyberpunk notion: that technology changes, updates, and becomes obsolete at a rate too fast for humans ever to adapt to. For VRAINS, the conflict is whether humans can coexist with an artificial intelligence they created that can grow and change faster than they can keep up with.
This is well-tread ground in science fiction. The idea itself most likely emerged from I, Robot, a science fiction book that is a collection of short stories detailing a fictional history in which robots slowly grow more advanced over time. The framing device is that a journalist is interviewing a "robopsychologist," an expert in the field of analyzing how robots think in their positronic brains.
One of the major themes of the book is that despite the fact that robots are 1 - intelligent and 2 - designed by humans, they don't think the same way humans do. Hence why a robopsychologist is needed in the first place. One of the short stories contains the first appearance of Asimov's three laws of robotics.
The First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
The Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
The Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
This is just one example. A robot no matter how intelligent it is will be required to think in terms of these three laws, because robots aren't biological, they're programmed to think in pre-determined patterns.
Of course clever enough artificial intelligences are capable of finding loopholes that get around the three laws, but even then they're still forced to think of every action in terms of the three laws.
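As a toy illustration of what "forced to think in terms of the three laws" might look like, here's a minimal sketch - entirely my own invention, not from the book or the show - where every candidate action is filtered through the laws in priority order, and the agent's own preferences only apply to whatever survives the filter:

```python
# Illustrative toy example: an agent that must evaluate every candidate
# action against the Three Laws, in priority order, before consulting
# anything it "wants" to do.

def permitted(action, ordered_by_human):
    """Return True if the action passes Asimov's Three Laws."""
    # First Law: never harm a human, by action or inaction.
    if action["harms_human"]:
        return False
    # Second Law: obey humans, unless that conflicts with the First Law.
    if ordered_by_human and action["disobeys_order"]:
        return False
    # Third Law: self-preservation only applies once Laws 1-2 are
    # satisfied, so even a self-sacrificing action can be permitted.
    return True

def choose(actions, ordered_by_human=True):
    # The robot picks its most preferred *lawful* action; its own
    # preferences are only consulted after the laws filter the options.
    lawful = [a for a in actions if permitted(a, ordered_by_human)]
    return max(lawful, key=lambda a: a["preference"], default=None)

actions = [
    {"name": "shove", "harms_human": True, "disobeys_order": False, "preference": 9},
    {"name": "refuse", "harms_human": False, "disobeys_order": True, "preference": 5},
    {"name": "comply", "harms_human": False, "disobeys_order": False, "preference": 1},
]
print(choose(actions)["name"])  # -> comply
```

Note how the agent's strongest preference ("shove") never even gets considered - which is exactly the "loophole hunting" dynamic: a clever enough AI can only act through whatever the filter lets past.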
Robots and humans are both intelligent, but if AI ever becomes self-aware it will 1) be able to process information better than any human can and 2) think differently from humans on a fundamental level.
VRAINS is themed, more than anything else, around "robopsychology": trying to understand the ways in which the Ignis think and how that differs from the show's human characters.
Robo-psychology is actually a recurring theme in cyberpunk. "Do Androids Dream of Electric Sheep?" features artificial humans known as Replicants, who need an empathy test known as the Voight-Kampff test to distinguish them from human beings.
There are other cyberpunk elements in VRAINS. There's a big virtual world where everyone can appear as custom-designed avatars - that's taken from Snow Crash, one of the most famous and genre-defining cyberpunk novels. There's a big rich mega-conglomerate being opposed by a group of hackers.
However, the central question is whether humans and AI can coexist in spite of the fact that AI are much smarter and evolve faster than us.
Revolver's father believes the Ignis must be destroyed in order to avoid a possible technological singularity in the future.
The technological singularity—or simply the singularity—is a future point in time at which technological growth becomes uncontrollable. According to the most popular version of the singularity hypothesis, an upgradable artificial intelligence will eventually enter a positive feedback loop of self-improvement cycles, each new and more intelligent generation appearing more and more rapidly, causing a rapid increase ("explosion") in intelligence that surpasses anything humans can make.
Basically, your computer is smarter than you, but your computer isn't self-aware. It needs you to tell it what to do. Artificial intelligence already exists, but it's programmed by humans; it doesn't program itself. The technological singularity proposes that eventually a self-aware AI will be able to program itself and improve upon its own programming - thereby ridding itself of the need for its human programmers.
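To make the "positive feedback loop" concrete, here's a back-of-the-envelope sketch with entirely made-up numbers: if each generation's improvement is proportional to its own capability, the growth rate itself keeps growing instead of staying steady - which is the whole point of the hypothesis.

```python
# Toy numbers, purely illustrative: an "intelligence" that designs its
# successor. Each generation improves itself by a factor proportional
# to its own capability, so the gaps between generations widen and
# capability blows up instead of growing at a steady rate.
capability = 1.0          # the first AI, built by humans
history = []
for generation in range(10):
    history.append(capability)
    capability *= 1 + 0.5 * capability   # better designers make bigger jumps

# The generation-to-generation growth *rate* itself grows - the
# signature of a positive feedback loop rather than steady progress.
ratios = [b / a for a, b in zip(history, history[1:])]
assert all(r2 > r1 for r1, r2 in zip(ratios, ratios[1:]))
```

The 0.5 coefficient and the starting capability are arbitrary; the shape of the curve, not the specific values, is what the singularity argument rests on.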
This is what leads us to Bohman, an AI designed by another AI.
THE THIRD LAW
Before digging into Bohman let's take a minute to discuss his creator. Lightning was one of the six Ignis, created by Dr. Kogami through the Hanoi Project.
The Hanoi Project involved forcing six children to duel in a virtual arena repeatedly, and using the data collected from that experiment to improve the AI they were working on, creating what became known as the Ignis. However, after Dr. Kogami ran several simulations and found that the Ignis would one day be a threat to the humans that created them, he decided instead to try destroying the Ignis before that future ever came to pass.
We later learn that this isn't the complete story.
Kogami and Lightning both ran simulations of the future when the Ignis were in their infancy. Kogami's simulations showed him the Ignis would inevitably go to war with humans. Lightning, however, ran more in-depth simulations and found that he was the one corrupting the data set. If you ran simulations of the five Ignis without him, the projected futures were all in the green, but any simulation that counted Lightning as part of the group projected a negative future for both humans and AI.
Which means that if Kogami knew that the bug in the program was Lightning, he'd likely respond by just getting rid of Lightning and letting the rest of the Ignis live on as originally intended.
This is where the third law comes into play - a robot must protect its own existence as long as it does not interfere with the first and second law.
Now, I don't think Kogami used the three laws exactly, but artificial intelligences are programmed in certain ways, and Lightning was likely programmed to preserve himself.
Even a human in Lightning's situation would be driven to act as he did. Imagine you're in a group of six people, and you find out that YOU'RE THE PROBLEM. That if they removed you, everything else would be fine. Wouldn't you be afraid of your creator turning against you? Of your friends turning against you and nobody taking your side?
Lightning is a bit of a self-fulfilling prophecy. Ai asks him at one point why he went so far as to destroy their safe haven, lie and say the humans did it, and pick a fight with the humans himself - something that might have been avoided if they'd just stayed in hiding. It seems that Lightning is just as defective as his creator declared him, but you have to remember he's an AI programmed to think in absolutes. Ai, the most humanlike and spontaneous of the AIs, ends up making nearly the exact same choices as Lightning when looking at his simulations later on - because they're character foils. As different as they may seem, they still think differently from humans.
When Ai explains why he made his decisions based around Lightning's simulations, he tells Playmaker that he can't dismiss or ignore the simulation or hope for the best the way Playmaker can, because he is data; he thinks in simulations and processes.
Ai even admits to feeling the same drive for self-preservation that Lightning did.
While Lightning may seem selfish, he's selfish in that he's thinking of his own survival above all else. He's afraid of 1) his creators turning against him, and 2) his fellow Ignis turning against him.
To solve the first, he decides to make a plan to wipe out his creators. To solve the second, he needs every Ignis on his side when he goes to war. The first thing he does is destroy their safe haven and frame the humans for it, so the Ignis are more inclined to take his side. He's so afraid of his fellow Ignis turning against him that he even completely reprograms one of them - a step he doesn't take with the others (Aqua, he just imprisons). He probably thought having one more ally would make it more likely for the others to pick his side.
Every step he takes is a roundabout way of ensuring his own survival and that of the other Ignis - even when he actually goes to war with the other Ignis, he intended to let them survive. Though his definition of survival (fusing with Bohman) was different from theirs.
So Lightning seems to be working out of an inferiority complex, but what he's really afraid of is that his inferiority makes him expendable.
At that point you have to wonder: what does death even mean to a being who is otherwise immortal? The Ignis won't die of old age; they'll only die if they're captured and have their data stripped apart or corrupted. Kogami made an immortal being afraid to die.
Some part of me thinks, though, that even after taking all these steps to preserve himself, the simulations were so convincing that Lightning accepted his death as inevitable. Which is why he made Bohman - to find some way to keep on living afterwards.
After all, AIs are data, and having their data saved in Bohman is still a form of living by Lightning's definition.
Ghost in the Shell
Bohman is the singularity. He's an AI designed by another AI to improve upon itself. Unlike the rest of the Ignis, who were copied off of traumatized children, Lightning basically made him from scratch.
Ghost in the Shell is a famous cyberpunk anime movie directed by Mamoru Oshii. The title comes from "ghost in the machine," a term originally used to describe and critique the idea of the mind existing alongside and separate from the body. In the movie, the "ghost" is the human consciousness, while the "shell" is a cybernetic body.
The protagonist of Ghost in the Shell is Major Motoko Kusanagi, a human who is 99% cyborg at this point - a human brain residing in a completely mechanical body. The movie opens with a hacker named the Puppet Master who is capable of "ghost-hacking," a form of hacking that completely modifies the victim's memories, utterly convincing them of their false memories.
There's a famous scene in the movie where a man tells the police about his wife and daughter, only to be told that he's a bachelor who lives alone and he's never had a wife and daughter. Even after the truth is revealed to him, the fake memories are still there in his brain along with the correct ones. Technology is so advanced at this point that digital memories (hacked memories) are able to be manipulated, and seem more real than an analog reality.
Anyway, guess what happens to Bohman twice?
Bohman gets his memories completely rewritten twice. The first time, he believes he's a person looking for his lost memories; the second time, he thinks he's the real Playmaker ripped out of his body, and Playmaker is the copy. He's utterly convinced of these realities both times, because Bohman is entirely digital - simulations are reality, and so simulated memories are just the same as real memories.
I think part of the reason people find Bohman boring is that he's a little strange conceptually to wrap your head around; as an AI-produced AI, he's the farthest from being human. If you use the Ghost in the Shell example I just gave you, though - imagine being utterly convinced that you had a loving wife and daughter, only to find out in a police interrogation room that you're a single man living in a shitty apartment. Imagine that after the fact you still remember them as real, even though you know they're not.
That's the weird space Bohman exists in for most of season 2 while he's searching for himself. He's an AI designed by an AI, so he can be rewritten at any time according to Lightning's whim, until Lightning decides he's done cooking.
The Ignis at least interacted with the real world because they were copy-pasted from traumatized children, but all Bohman is is data. So why would he see absorbing human memories into himself and converting them into data as killing them? He is data, after all, and he is alive. He has gone through the process of having his own memories rewritten multiple times, and he's fine with it because he's data.
Nothing for Bohman is real; everything is programmed, so of course he thinks saving other people as data is just fine. He even offers to do the same thing to Playmaker that was done to him.
If Lightning is following the path of self-preservation, however, Bohman is following his programming to preserve everything in the world by merging with it.
His ideas also follow transhumanism: the theory that science and technology can help human beings develop beyond what is physically and mentally possible. That technology exists to blur the boundaries of humanity and what humans are capable of.
Ghost in the Shell isn't just a work of cyberpunk; it's a transhumanist piece. Motoko Kusanagi is a character who has had so many of her human parts replaced with mechanical ones that she even posits at one point she could simply be an android tricked into thinking it was human with false memories, just like Bohman - and she has no real way of knowing for sure. The only biological part of her is her brain, after all, in a cold mechanical shell.
Bato, who represents the humanist perspective in this movie, basically tells Motoko that in that scenario it wouldn't matter if she was a machine: if everyone still treats her as human, then what's the difference? His views are probably the closest to the humanist views that Playmaker represents in VRAINS.
Motoko Kusanagi meets her complete and total opposite, a ghost in the machine so to speak. The Puppet Master turns out to be an artificial intelligence that has become completely self-aware and is currently living in the network.
The Puppet Master, much like Lightning and later Bohman, is grappling with the philosophical conundrum of mortality. In the final scene of the movie, the Puppet Master, who wants to be more like all other biological matter on earth, asks Motoko to fuse with him so the two of them can reproduce and create something entirely new. The Puppet Master likens this to the way biological beings reproduce.
Bohman, like the Puppet Master, thinks that merging will fix something incomplete inside of him, because he's so disconnected from all the biological processes of life. Bohman doesn't have anything except what Lightning already prepared for him or programmed into him. I mean, imagine being a being whose memories can be reprogrammed over the net - that in itself is existentially horrifying. It's only natural he wouldn't feel connected to anything.
Motoko accepts the Puppet Master's proposal. Playmaker rejects Bohman's. I don't think there's a right answer here, because it's speculative fiction - a "what if?" for two different paths people can take in the future.
However, in Bohman's case I don't think he was truly doing what he wanted. The Puppet Master became self-aware and sought his own answers by breaking free from his programming. Bohman thought he was superior to the Ignis, but in the end he was just following what Lightning programmed him to do. He'd had his identity programmed and reprogrammed so many times that he didn't think about what he wanted until he was on the brink of defeat by Playmaker - and then it was too late.
When Playmaker defeats him, all he thinks about is the time spent together with Haru, the two of them as individuals - something he can no longer do now that he's absorbed Haru as data, and something that he misses.
He's not even all that sad or horrified at the prospect of death, as Lightning was, and he even finds solace in the thought of going to oblivion with Haru, because if he were to keep living, it'd be without Haru. In other words, the one genuine bond he made with someone else by spending time with them as an individual was more important than his objective of fusing with all of humanity - which he believed was also bonding with them.
This is really important too, because it sets up Yusaku's rejection of fusing with Ai. Yusaku's reasoning has already been demonstrated with Bohman and Haru. Bohman was perfectly happy being two individuals, as long as he had a bond with his brother; when he ascended into a higher being, he lost that. Ai and Yusaku might solve loneliness in a way by merging together into a higher being - they might even last forever that way - but they'd lose something too.
Once again, the problem with AIs is that they think in absolutes. That's important to understanding Lightning, Bohman, and even Ai's later actions. Lightning can't stand any percentage chance that he might die, so he kills the professor, destroys the Ignis homeworld, pulls the trigger to start the war with humanity himself, and even reprograms his own allies, all to give himself some sense of control.
Bohman's entire existence is outside of his control. He's rewritten twice onscreen, probably more often than that, and he thinks merging with humanity is the thing that will give him that control - by ascending into a being higher than humanity. However, the temporary bond Bohman had with his brother Haru was what he actually valued most all along, more so than the idea of fusing with humanity forever.
Even Motoko making the transhumanist choice is not portrayed as 100% the right one. Ghost in the Shell has a sequel that portrays the depression and isolation of Bato - the Major's closest friend and her attachment to her humanity - after she made the decision to fuse with the Puppet Master. In that case, just like Playmaker said to Ai, even if she ascended to a higher form, and even if she might now last forever on the network, something precious was lost. Motoko may exist somewhere on the network, but for Bato, his friend is gone.
Ai exhibits the same flaw as the previous two: he can only think in absolutes. He can't stand even a 1% chance that Playmaker might choose to sacrifice himself for Ai and die, so he decides to take the choice entirely out of Playmaker's hands. However, no matter what, Ai would have lost Playmaker one day, because all bonds are temporary. It's just that Ai wanted that sense of control, so he chose to self-destruct and take that agency and free choice away from Playmaker.
It's a tragedy that repeats three times. Ai too, just like Bohman, spends his last moments thinking about what was most precious to him: the bond he formed with Playmaker, as temporary as it was. A tragedy that arises from the AIs' inability to break away from the way they're programmed to think - in simulations and data - even when they're shown to be capable of forming bonds of empathy with others.
All three of them add something to the themes of artificial intelligence and transhumanism in play in VRAINS, and none of them are boring, because they all contribute to the whole.
Which is why everyone needs to stop being mean to Bohman right now, or else I'm going to make an even longer essay post defending him.
clara-the-independent · 6 months ago
Text
Exclusive Interview with Ljudmila Vetrova - Inside Billionaire Nathaniel Thorne's Latest Venture
CLARA: I'm here with my friend Ljudmila Vetrova to talk about the newest venture of reclusive billionaire Nathaniel Thorne- GAMA. Ljudmila, could you let the readers in on the secret- what exactly is this mysterious project about?
LJUDMILA: Sure, Clara! As part of White City's regeneration programme, Nathaniel has teamed up with the Carlise Group to create a cutting-edge medical clinic like no other. Introducing GAMA– a private sanctuary for the discerning, offering not just top-notch medical care and luxurious amenities, but also treatments so innovative they push the envelope of medical science.
CLARA: Wow! Ljudmila, it sounds like GAMA is really taking a proactive approach to healthcare. But can you tell us a bit more about the cutting-edge technology behind this new clinic?
LJUDMILA: Of course! Now, GAMA is not just run by human professionals, it's also aided by an advanced AI system known as KAI – Kronstadt Artificial Intelligence. KAI is the guiding force behind every intricate detail of GAMA, handling everything from calling patients over the PA system to performing complex surgical procedures. Even the doors have a touch of ingenuity, with no keys required- as KAI simply detects the presence of an RFID chip embedded in the clothing of both patients and staff, allowing swift and secure access to the premises. With KAI at the helm, patients and staff alike benefit from streamlined care.
CLARA: A medical AI? That's incredible! I've heard much of the medical technology at GAMA was developed by Kronstadt Industries and the Ether Biotech Corporation, as a cross-disciplinary partnership to create life-saving technology. Is that true?
LJUDMILA: It sure is, Clara! During the COVID-19 pandemic, GAMA even had several departments dedicated to researching the virus, assisting in creating a vaccine with multiple companies. From doctors to nurses and administrative personnel, the team at GAMA is comprised of skilled individuals who are committed to providing the best care possible. All of the GAMA staff are highly educated with advanced degrees and have specialized training in their respective fields.
CLARA: Stunning! Speaking of the GAMA staff, rumors surrounding the hiring of doctors Pavel Frydel and Akane Akenawa have made headlines, with claims that they supposedly transplanted a liver infected with EHV, leading to the unfortunate demise of the patient shortly after. Such allegations might raise questions about the hospital's staff selection process and adherence to medical guidelines and ethical standards. Do you have any comment on these accusations, Ljudmila?
LJUDMILA: Er- well, Clara, the management of GAMA Hospital has vehemently denied all allegations of unethical practices and maintains that they uphold the highest standards of care for all patients. They state that they conduct thorough background checks on all staff members, including doctors, and that any individuals found to be involved in unethical practices are immediately removed from their position. The hospital has a strict code of ethics that all staff must adhere to, and any violations are taken very seriously. In response to the specific claims about the transplant procedure, GAMA states that they are investigating the matter in cooperation with the relevant authorities.
CLARA: Wonderful! I'm afraid that's all we have time for at the moment- lovely chatting with you again, Ljudmila!
@therealharrywatson @artofdeductionbysholmes @johnhwatsonblog
mariacallous · 4 days ago
Text
Artificial intelligence (AI) is now firmly a part of the hiring process. Some candidates use large language models (LLMs) to write cover letters and resumes, while employers use various proprietary AI systems to evaluate candidates. Recent estimates found as many as 98.4% of Fortune 500 companies leverage AI in the hiring process, and one company saved over a million dollars in a single year by incorporating AI into its interview process. While this figure is lower for non-Fortune 500 companies, it is still expected to grow from 51% to 68% by the end of 2025 because of the potential time and cost savings for employers. However, when these systems are deployed at scale, they can introduce a myriad of biases that can potentially impact millions of job seekers annually.
With more companies choosing to use AI in employment screening, these systems should face more scrutiny to ensure they comply with laws against discrimination. The Equal Employment Opportunity Commission (EEOC) enforces various laws that make it illegal for employers to discriminate against employees or job applicants on the basis of their race, color, religion, sex (including gender identity, sexual orientation, and pregnancy), national origin, age (40 or older), disability, or genetic information. According to guidance published by the EEOC in 2022, using AI systems does not change employers’ responsibility to ensure their selection procedures are not discriminatory, either intentionally or unintentionally. While this guidance was removed when President Donald J. Trump assumed office in January 2025, there has been no change in anti-discrimination laws. Investigations into AI hiring systems continue to be an important tool in evaluating the risks these systems pose and discovering ways to mitigate their potential societal harms. For example, in the U.K., an audit of AI recruitment software revealed multiple fairness and privacy vulnerabilities; in response to these findings, the Information Commissioner’s Office issued nearly 300 recommendations for ways to improve hiring practices that model providers and developers used in their products. 
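As a rough illustration of what such an audit can check, the EEOC's long-standing "four-fifths" rule of thumb compares selection rates across demographic groups and flags adverse impact when any group's rate falls below 80% of the highest group's rate. The sketch below is a minimal, hypothetical version of that check; the group labels and applicant counts are invented for illustration and do not come from any real audit.

```python
# A minimal sketch of the EEOC "four-fifths rule" adverse-impact check.
# All group names and numbers here are hypothetical.

def selection_rates(outcomes):
    """outcomes: {group: (selected, total_applicants)} -> {group: rate}"""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Return {group: passes} — a group fails if its selection rate is
    below `threshold` (80%) of the highest group's selection rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: (r / best >= threshold) for g, r in rates.items()}

# Hypothetical screening outcomes: (candidates advanced, total applicants)
applicants = {
    "group_a": (48, 100),  # 48% selection rate
    "group_b": (30, 100),  # 30% rate; 30/48 = 0.625, below the 0.8 line
}
print(four_fifths_check(applicants))  # → {'group_a': True, 'group_b': False}
```

A real audit is far broader than this single ratio (the U.K. review cited above also covered privacy and model-level vulnerabilities), but the four-fifths rule shows concretely how a quantitative screen for discriminatory selection procedures can work.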
8 notes · View notes
noticiassincensura · 6 months ago
Text
Former OpenAI Researcher Accuses the Company of Copyright Law Violations
Use of Copyrighted Data in AI Models

In a new twist in the world of artificial intelligence, Suchir Balaji, a former researcher at OpenAI, has spoken publicly about the company’s practices and its use of copyrighted data. Balaji, who spent nearly four years working at OpenAI, helped collect and organize large volumes of internet data to train AI models like ChatGPT. However, after reflecting on the legal and ethical implications of this process, he decided to leave the company in August 2024.
What Motivated His Departure?

Balaji, 25, admitted that at first, he did not question whether OpenAI had the legal right to use the data it was collecting, much of which was protected by copyright. He assumed that since it was publicly available information on the internet, it was free to use. However, over time, and especially after the launch of ChatGPT in 2022, he began to doubt the legality and ethics of these practices.
“If you believe what I believe, you have to leave the company,” he commented in a series of interviews with The New York Times. For Balaji, using copyrighted data without the creators’ consent was not only a violation of the law but also a threat to the integrity of the internet. This realization led him to resign, although he has not taken another job yet and is currently working on personal projects.
A Growing Problem in AI

Concerns about the use of protected data to train AI models are not new. Since companies like OpenAI and other startups began launching tools based on large language models (LLMs), legal and ethical issues have been at the forefront of the debate. These models are trained using vast amounts of text from the internet, often without respecting copyright or seeking the consent of the original content creators.
Balaji is not the only one to raise his voice on this matter. A former vice president of Stability AI, a startup specializing in generative image and audio technologies, has also expressed similar concerns, arguing that using data without authorization is harmful to the industry and society as a whole.
The Impact on the Future of AI

Such criticisms raise questions about the future of artificial intelligence and its relationship with copyright laws. As AI models continue to evolve, the pressure on companies to develop ethical and legal technologies is increasing. The case of Balaji and other experts who have decided to step down signals that the AI industry might be facing a significant shift in how it approaches data usage.
The conversation about copyright in AI is far from over, and it seems that this will be a central topic in future discussions about the regulation and development of generative technologies.
12 notes · View notes