#software testing risks
compunnelinc · 4 months
What Is Risk-Based Testing in QA and How to Prioritize Tests?
In the dynamic world of software development, ensuring quality while managing tight deadlines is crucial. Risk-based testing (RBT) emerges as a powerful strategy in Quality Assurance (QA) to address this challenge. This blog delves into the essentials of RBT, explaining how it focuses on identifying and mitigating the most significant risks in a software project. Discover the core principles of RBT, its benefits, and practical steps to implement this approach effectively. Learn how to prioritize your tests based on risk assessment, ensuring that your testing efforts are both efficient and impactful. Whether you're a QA professional or a project manager, this guide will equip you with the knowledge to enhance your testing strategy and deliver high-quality software with confidence.
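A common way to operationalize this kind of prioritization is a simple likelihood × impact score per test area. The sketch below is illustrative only — the test areas and ratings are assumptions, not taken from the blog:

```python
# Minimal sketch of risk-based test prioritization: score each test area by
# likelihood (1-5) and impact (1-5), then run tests in descending risk order.

def risk_score(likelihood: int, impact: int) -> int:
    """Risk score on a 1-25 scale."""
    return likelihood * impact

# Illustrative test areas with assumed ratings.
test_areas = [
    {"name": "payment processing", "likelihood": 4, "impact": 5},
    {"name": "user profile page",  "likelihood": 2, "impact": 2},
    {"name": "login/auth",         "likelihood": 3, "impact": 5},
]

prioritized = sorted(
    test_areas,
    key=lambda t: risk_score(t["likelihood"], t["impact"]),
    reverse=True,
)

for area in prioritized:
    print(area["name"], risk_score(area["likelihood"], area["impact"]))
```

In practice the ratings would come from a risk workshop with developers and stakeholders, but the mechanics of the prioritization itself are this simple.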
Read more: https://www.compunnel.com/blogs/risk-based-testing-in-qa-prioritizing-tests-for-maximum-impact/
caramel12345 · 11 months
HR Innovations & Growth | FYI Solutions
Innovate your HR practices for sustained growth and competitive advantage. Visit today https://fyisolutions.com
news4nose · 1 year
Do you know that there are hidden, harmful commands within that normal-looking code? Instead of relying on copy-pasting, take the time to learn and understand the commands you’re using. Learning the basics of command-line operations can go a long way in helping you.
garymdm · 1 year
Poor data quality significantly hinders banks' ability to present accurate regulatory reports
Recent research conducted by Aite Group highlighted the link between poor data quality and regulatory penalties, further creating a case for financial services organisations to improve their data quality. Poor data quality can lead to an adverse conclusion by regulators about a bank’s ability to accurately monitor business risks. Focus on data…
consultingwives · 2 months
As someone who works in the reliability sector of IT, I cannot emphasize enough how much you have to give 0 fucks about professional standards and best practices in order to do something like what Crowdstrike did.
At the company I work for, which you have definitely heard of, there are thousands of people (including me, hi) part of whose job it is to sit in rooms for literal hours every week with the people building new features and updating our software and ask them every question we can possibly think of about how their changes might impact the overall system and what potential risks there are. We brainstorm how to minimize those risks, impose requirements on the developers, and ultimately the buck stops with us. Some things are just too risky.
Many of the practices developed at this and other companies are now in wide use across the industry, including things like staggered rollouts (i.e. only 1/3 people get this update at first, then 2/3, then everyone) and multi-stage testing (push it to a fake system we set up for these purposes, see what it does).
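The staggered-rollout idea described above can be sketched as deterministic bucketing: hash each machine's ID into a percentage bucket, then widen the rollout stage by stage. This is a minimal illustration under my own assumptions, not any particular company's implementation:

```python
import hashlib

def rollout_bucket(machine_id: str) -> int:
    """Deterministically map a machine ID to a bucket in [0, 100)."""
    digest = hashlib.sha256(machine_id.encode()).hexdigest()
    return int(digest, 16) % 100

def should_receive_update(machine_id: str, rollout_percent: int) -> bool:
    """A machine gets the update only once the rollout covers its bucket."""
    return rollout_bucket(machine_id) < rollout_percent

# Widen the rollout in stages; each machine's answer is stable across stages,
# so a machine included at 33% stays included at 66% and 100%.
fleet = [f"machine-{i}" for i in range(1000)]
for stage in (33, 66, 100):
    included = sum(should_receive_update(m, stage) for m in fleet)
    print(f"{stage}% stage: {included} of {len(fleet)} machines")
```

The point of hashing rather than random sampling is that inclusion is stable: if a machine breaks at the 33% stage, you can halt the rollout knowing exactly which machines were affected.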
In cases where you’re updating firmware or an os, there are physical test devices you need to update and verify that everything behaves as expected. If you really care about your customers you’ll hand the device to someone who works on a different system altogether and tell them to do their worst.
The bottom line here is that if Crowdstrike were following anything even resembling industry best practices there should have been about twenty failsafes between a kernel bug and a global update that bricked basically every enterprise machine in the world. This is like finding out the virus lab has a direct HVAC connection to the big conference room. There is genuinely no excuse for this kind of professional incompetence.
#Blog | Risk-based testing is an important approach to software testing as it helps ensure that testing efforts focus on the most critical areas of the software or system. It helps improve the effectiveness and efficiency of the entire process.
Three AI insights for hard-charging, future-oriented smartypantses
MERE HOURS REMAIN for the Kickstarter for the audiobook for The Bezzle, the sequel to Red Team Blues, narrated by @wilwheaton! You can pre-order the audiobook and ebook, DRM free, as well as the hardcover, signed or unsigned. There’s also bundles with Red Team Blues in ebook, audio or paperback.
Living in the age of AI hype makes demands on all of us to come up with smartypants prognostications about how AI is about to change everything forever, and wow, it's pretty amazing, huh?
AI pitchmen don't make it easy. They like to pile on the cognitive dissonance and demand that we all somehow resolve it. This is a thing cult leaders do, too – tell blatant and obvious lies to their followers. When a cult follower repeats the lie to others, they are demonstrating their loyalty, both to the leader and to themselves.
Over and over, the claims of AI pitchmen turn out to be blatant lies. This has been the case since at least the age of the Mechanical Turk, the 18th-century chess-playing automaton that was actually just a chess player crammed into the base of an elaborate puppet that was exhibited as an autonomous, intelligent robot.
The most prominent Mechanical Turk huckster is Elon Musk, who habitually, blatantly and repeatedly lies about AI. He's been promising "full self driving" Teslas in "one to two years" for more than a decade. Periodically, he'll "demonstrate" a car that's in full self-driving mode – which then turns out to be a canned, recorded demo:
https://www.reuters.com/technology/tesla-video-promoting-self-driving-was-staged-engineer-testifies-2023-01-17/
Musk even trotted an autonomous, humanoid robot on-stage at an investor presentation, failing to mention that this mechanical marvel was just a person in a robot suit:
https://www.siliconrepublic.com/machines/elon-musk-tesla-robot-optimus-ai
Now, Musk has announced that his junk-science neural interface company, Neuralink, has made the leap to implanting neural interface chips in a human brain. As Joan Westenberg writes, the press have repeated this claim as presumptively true, despite its wild implausibility:
https://joanwestenberg.com/blog/elon-musk-lies
Neuralink, after all, is a company notorious for mutilating primates in pursuit of showy, meaningless demos:
https://www.wired.com/story/elon-musk-pcrm-neuralink-monkey-deaths/
I'm perfectly willing to believe that Musk would risk someone else's life to help him with this nonsense, because he doesn't see other people as real and deserving of compassion or empathy. But he's also profoundly lazy and is accustomed to a world that unquestioningly swallows his most outlandish pronouncements, so Occam's Razor dictates that the most likely explanation here is that he just made it up.
The odds that there's a human being beta-testing Musk's neural interface with the only brain they will ever have aren't zero. But I give it the same odds as the Raelians' claim to have cloned a human being:
https://edition.cnn.com/2003/ALLPOLITICS/01/03/cf.opinion.rael/
The human-in-a-robot-suit gambit is everywhere in AI hype. Cruise, GM's disgraced "robot taxi" company, had 1.5 remote operators for every one of the cars on the road. They used AI to replace a single, low-waged driver with 1.5 high-waged, specialized technicians. Truly, it was a marvel.
Globalization is key to maintaining the guy-in-a-robot-suit phenomenon. Globalization gives AI pitchmen access to millions of low-waged workers who can pretend to be software programs, allowing us to pretend to have transcended capitalism's exploitation trap. This is also a very old pattern – just a couple decades after the Mechanical Turk toured Europe, Thomas Jefferson returned from the continent with the dumbwaiter. Jefferson refined and installed these marvels, announcing to his dinner guests that they allowed him to replace his "servants" (that is, his slaves). Dumbwaiters don't replace slaves, of course – they just keep them out of sight:
https://www.stuartmcmillen.com/blog/behind-the-dumbwaiter/
So much AI turns out to be low-waged people in a call center in the Global South pretending to be robots that Indian techies have a joke about it: "AI stands for 'absent Indian'":
https://pluralistic.net/2024/01/29/pay-no-attention/#to-the-little-man-behind-the-curtain
A reader wrote to me this week. They're a multi-decade veteran of Amazon who had a fascinating tale about the launch of Amazon Go, the "fully automated" Amazon retail outlets that let you wander around, pick up goods and walk out again, while AI-enabled cameras totted up the goods in your basket and charged your card for them.
According to this reader, the AI cameras didn't work any better than Tesla's full-self driving mode, and had to be backstopped by a minimum of three camera operators in an Indian call center, "so that there could be a quorum system for deciding on a customer's activity – three autopilots good, two autopilots bad."
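The quorum scheme the reader describes — three human reviewers, majority wins — is a standard pattern for backstopping unreliable classifiers. A minimal sketch, with the vote labels invented for illustration:

```python
from collections import Counter

def quorum_decision(votes, minimum=3):
    """Return the majority label if a strict majority of reviewers agree,
    otherwise None (e.g. escalate to an additional reviewer)."""
    if len(votes) < minimum:
        raise ValueError("not enough reviewers for a quorum")
    label, count = Counter(votes).most_common(1)[0]
    return label if count > len(votes) / 2 else None

print(quorum_decision(["took item", "took item", "put item back"]))  # prints: took item
print(quorum_decision(["took item", "put item back", "no action"]))  # prints: None
```

Note that a quorum of humans raises accuracy but multiplies labor: every ambiguous customer action costs three reviews instead of one, which is exactly the economics the post is describing.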
Amazon got a ton of press from the launch of the Amazon Go stores. A lot of it was very favorable, of course: Mister Market is insatiably horny for firing human beings and replacing them with robots, so any announcement that you've got a human-replacing robot is a surefire way to make Line Go Up. But there was also plenty of critical press about this – pieces that took Amazon to task for replacing human beings with robots.
What was missing from the criticism? Articles that said that Amazon was probably lying about its robots, that it had replaced low-waged clerks in the USA with even-lower-waged camera-jockeys in India.
Which is a shame, because that criticism would have hit Amazon where it hurts, right there in the ole Line Go Up. Amazon's stock price boost off the back of the Amazon Go announcements represented the market's bet that Amazon would evert out of cyberspace and fill all of our physical retail corridors with monopolistic robot stores, moated with IP that prevented other retailers from similarly slashing their wage bills. That unbridgeable moat would guarantee Amazon generations of monopoly rents, which it would share with any shareholders who piled into the stock at that moment.
See the difference? Criticize Amazon for its devastatingly effective automation and you help Amazon sell stock to suckers, which makes Amazon executives richer. Criticize Amazon for lying about its automation, and you clobber the personal net worth of the executives who spun up this lie, because their portfolios are full of Amazon stock:
https://sts-news.medium.com/youre-doing-it-wrong-notes-on-criticism-and-technology-hype-18b08b4307e5
Amazon Go didn't go. The hundreds of Amazon Go stores we were promised never materialized. There's an embarrassing rump of 25 of these things still around, which will doubtless be quietly shuttered in the years to come. But Amazon Go wasn't a failure. It allowed its architects to pocket massive capital gains on the way to building generational wealth and establishing a new permanent aristocracy of habitual bullshitters dressed up as high-tech wizards.
"Wizard" is the right word for it. The high-tech sector pretends to be science fiction, but it's usually fantasy. For a generation, America's largest tech firms peddled the dream of imminently establishing colonies on distant worlds or even traveling to other solar systems, something that is still so far in our future that it might well never come to pass:
https://pluralistic.net/2024/01/09/astrobezzle/#send-robots-instead
During the Space Age, we got the same kind of performative bullshit. On The Well, David Gans mentioned hearing a promo on SiriusXM for a radio show with "the first AI co-host." To this, Craig L Maudlin replied, "Reminds me of fins on automobiles."
Yup, that's exactly it. An AI radio co-host is to artificial intelligence as a Cadillac Eldorado Biarritz tail-fin is to interstellar rocketry.
Back the Kickstarter for the audiobook of The Bezzle here!
If you’d like an essay-formatted version of this post to read or share, here’s a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/01/31/neural-interface-beta-tester/#tailfins
luminlogic · 2 years
Best-in-Class Software for Product Lifecycle Development
By accelerating the creation and circulation of documents, we sell your product. Our specialists will create plans for risk management and keep records. Contact us today
rieamena · 2 months
I wonder at what point in the "reboot saga" would the other cunning hares step in and help Billy? Like, on one hand you have a convenient way to stop Billy from whatever he is doing, and watching how Y/N is trying to confess without crashing him must be entertaining. On the other after crash 65 it must get worrying :/
finally!—
the first few crashes had been amusing, a source of lighthearted teasing among the group. you’d attempt to confess, and billy, ever the charismatic and responsive robot, would suddenly freeze, eyes flickering as his system struggled to process the influx of data. the scene would end with him rebooting, and the cycle would start anew. after the first couple of crashes, the laughter faded into concern
“i don’t get it,” you muttered, sprawled out on the couch in the cunning hares' common room. “why does he keep crashing? it’s just a confession.”
“he’s not built to handle that kind of emotional intensity,” nicole explained, fiddling with the handles on his jacket, metal body limp after yet another of your failed confessions. “his programming is complex, but at the core, it’s still a machine trying to process human emotions.”
“and you’re very special to him,” anby added, smiling gently. “that makes it even harder for his system to cope.”
the three of you brainstormed solutions, testing different approaches and environmental controls. they installed cooling systems, tweaked his software, and even practiced mock confessions. yet, each time you poured your heart out to billy, his system would crash and reboot, leaving you both in a loop of unfinished sentences and unspoken feelings
one night, after crash number seventy-two—a number you only knew thanks to the intricate logs of attempted confessions in your mini journal—the gravity of the situation hit everyone. billy’s constant reboots were taking a toll on his system, and the risk of permanent damage was becoming too great to ignore
“this has to stop,” nicole declared, her voice heavy with determination. “we need to find a way to get through to him without causing another crash.”
after much debate, the team devised a new strategy. it wasn’t just about cooling fans and air conditioners; it was about creating a space where billy could process his emotions without the threat of overload. they set up a room specifically for this purpose, equipped with not just temperature controls but also calming visuals and sounds designed to keep billy’s system stable
the designated spot was meticulously prepared. soft lighting filled the room, creating a warm and inviting atmosphere. the hum of air conditioners and strategically placed fans ensured the environment was cool. in the center of the room, billy sat on a cushioned chair, looking a bit puzzled but the aura he exuded was always happy
anby gave you a reassuring nod as she adjusted a fan to blow directly at billy. "remember, y/n, stick to the script and stay calm. we’re right here with you."
you took a deep breath and approached billy, your heart pounding. "hey, billy," you greeted, your voice steady despite the butterflies in your stomach
"hey, [name]," he replied, the crescents of his eyes lighting up the room. "what’s up?"
you clutched the script tightly, glancing at the words one last time before looking up at him. "billy, there’s something i’ve been wanting to tell you for a long time. it’s been on my mind, and i need you to know."
billy’s eyes widened slightly, his full attention on you. you continued, your voice soft but clear, following the script's guidance. "you mean a lot to me, more than just a friend. whenever i’m with you, everything feels brighter and better. your laughter, your kindness, the way you always know how to make me smile. i cherish every moment we spend together."
billy blinked, processing your words. the fans hummed softly, maintaining a cool breeze. you took another deep breath, steadying yourself. "billy, i like you. a lot. more than just a friend. i care about you deeply, and i wanted you to know how i feel."
for a moment, there was silence. billy’s eyes flickered, and you held your breath, waiting for the familiar signs of a reboot (slower movement, glitched speech, loss of composure), but instead, his eyes displayed bright red hearts
"[name]," he said softly, reaching out to take your hand. "i… i like you too. more than just a friend." nicole crept over to a cooling fan close to him, cranking up its power
unfortunately, the slip of paper didn't have any more words to refer to so you had to improvise. "so does this mean we're like, dating now?"
"are we really?! we're dating now?!" billy jumped up from his seat, practically oozing excitement and happiness, "wait, but i've never had a partner before. what if i do something wrong? what if you don't like me anymore?!" he shook your shoulders, speaking a mile a minute, ranting about all the things he could do wrong and all the things that could go wrong
"also, it's really cold in here, i can almost feel my metal constricting! can we turn the thermostat up or something?"
you couldn't help but laugh. "one step at a time, billy. let's start with the thermostat."
you finally got billy kid after seventy two reboots, and boy, wasn't it rewarding.
it's actually so embarrassing how long this took and it's not even good....
billy kid taglist
@pedrosimp137 @mary-moongood @nyxin-lynx @lemonboy011 @eisblume77
@amaryllisenvy @megan017 @astral-spacepumpkin @corrupted-tale @inkycap
@thurstonw @plapsha @lavenderthewolf @kurakusun @vitaevaaa
@sweetadonisbutbetter @cobraaah @mochiitoby @clickingchip @bardivislak
@h3r6c00k13 @cozi-cofee @apestegui-y @luvuyuuji @theitdoitnobody
@fersitaam @cathrnxxo @monkepawbz @fl1ghtl3ssdrag0n @dabislilbaby
@many-names-yuna @muffin1304 @doort @j3llycarnival @juuanna
@discipleofthem @spookylorekeep @wazkalia @miaubrebmiau @hersweetsstrawberry
mariacallous · 1 month
At the 2023 Defcon hacker conference in Las Vegas, prominent AI tech companies partnered with algorithmic integrity and transparency groups to sic thousands of attendees on generative AI platforms and find weaknesses in these critical systems. This “red-teaming” exercise, which also had support from the US government, took a step in opening these increasingly influential yet opaque systems to scrutiny. Now, the ethical AI and algorithmic assessment nonprofit Humane Intelligence is taking this model one step further. On Wednesday, the group announced a call for participation with the US National Institute of Standards and Technology, inviting any US resident to participate in the qualifying round of a nationwide red-teaming effort to evaluate AI office productivity software.
The qualifier will take place online and is open to both developers and anyone in the general public as part of NIST's AI challenges, known as Assessing Risks and Impacts of AI, or ARIA. Participants who pass through the qualifying round will take part in an in-person red-teaming event at the end of October at the Conference on Applied Machine Learning in Information Security (CAMLIS) in Virginia. The goal is to expand capabilities for conducting rigorous testing of the security, resilience, and ethics of generative AI technologies.
“The average person utilizing one of these models doesn’t really have the ability to determine whether or not the model is fit for purpose,” says Theo Skeadas, chief of staff at Humane Intelligence. “So we want to democratize the ability to conduct evaluations and make sure everyone using these models can assess for themselves whether or not the model is meeting their needs.”
The final event at CAMLIS will split the participants into a red team trying to attack the AI systems and a blue team working on defense. Participants will use the AI 600-1 profile, part of NIST's AI risk management framework, as a rubric for measuring whether the red team is able to produce outcomes that violate the systems' expected behavior.
“NIST's ARIA is drawing on structured user feedback to understand real-world applications of AI models,” says Humane Intelligence founder Rumman Chowdhury, who is also a contractor in NIST's Office of Emerging Technologies and a member of the US Department of Homeland Security AI safety and security board. “The ARIA team is mostly experts on sociotechnical test and evaluation, and [is] using that background as a way of evolving the field toward rigorous scientific evaluation of generative AI.”
Chowdhury and Skeadas say the NIST partnership is just one of a series of AI red team collaborations that Humane Intelligence will announce in the coming weeks with US government agencies, international governments, and NGOs. The effort aims to make it much more common for the companies and organizations that develop what are now black-box algorithms to offer transparency and accountability through mechanisms like “bias bounty challenges,” where individuals can be rewarded for finding problems and inequities in AI models.
“The community should be broader than programmers,” Skeadas says. “Policymakers, journalists, civil society, and nontechnical people should all be involved in the process of testing and evaluating of these systems. And we need to make sure that less represented groups like individuals who speak minority languages or are from nonmajority cultures and perspectives are able to participate in this process.”
ab121500 · 3 months
FO4 Companions React to a Blind!Sole
Sole being blind Pre-war (TW for Ableism) 
I've never written one of these before, but my most recent  Sole is blind so I thought I'd put my idea of how I think the companions would react. 
Cait: Doesn't believe that Sole is blind at first. They're too good of a shot, too aware of what's going on to be blind. But she notices that Sole does struggle with things those with sight wouldn't even bat an eye at. They often face the wrong way when being told to read something, they also are terrible at keeping watch. She at first is mad that she has to "pick up the slack" but she quickly gets over it after Sole treats her with kindness and understanding. Doesn't take long for her to stand up for Sole in every situation.
Codsworth: Sole's blindness was something that he got used to not long after being purchased. As a Mr. Handy, he has software that allows him to assist those who are disabled. It never affected how Sole treated him, nor his perception of them. In the post war setting, with the world being so different from what Sole remembers, he tries to assist in even more ways than he used to. Going with Sole to Concord for example, or at least wanting to when they denied his request. He doesn't treat Sole any differently from before, and tries not to handle them with kid gloves knowing how much they hate that.
Curie: Sees Sole as a curiosity. At first she often tries to run little tests to figure out the severity of their blindness and the extent of their capabilities. She's a doctor, she wants to know the extent of their blindness and what she can do to potentially assist. That being said, after being called out she knocks it off and switches her focus on listening to Sole and what they want. She wants to be helpful, unaware of how condescending she can be at times.
Deacon: Thinks Sole is extremely interesting from the moment he sees them the first time. Their ability to get around without much trouble and handle the wasteland without sight is fascinating. He appreciates that Sole can't see him following them around, and when Sole comes around to get the courser chip decoded he still vouches for them. Still, due to the nature of the Railroad and the fact they use road signs and need to keep an eye out he worries Sole won't be a good agent. He doesn't want to risk any synth getting caught or hurt because Sole didn't notice the danger. Overall he respects Sole, but thinks Sole is best on Railroad jobs with him or Glory by their side. He does get annoyed that their Pip-Boy has text-to-speech on, as it gives them away when sneaking, but he can look past that.
Danse: The fact that a citizen came in guns blazing to save him and his squad got his respect, but once he noticed their blindness he instantly feels like Sole is a liability. People being blind wasn't the most uncommon thing in the brotherhood, but those who were blind often weren't allowed on the field anymore. More often than not they were asked to more or less retire to not put their teammates at risk. He tries to tactfully get Sole to leave, but he knows he needs the extra firepower to get the transmitter. He appreciates the help but is wary about sponsoring them even if they proved their capabilities. After becoming his newest charge, he acts like they are the same as any recruit. That being said, it takes him a long time to not handle Sole like a helpless child. He knows Sole can handle themselves, but he can't help but treat them a little differently. It takes being pulled aside and called out before he realizes he's even doing that, and he apologizes and tries his hardest to knock it off.
Dogmeat: Confused at first but he's a good smart boy. Happy to be Sole's eyes instead.
Hancock: At first, he thought Sole was just a cold, badass bitch. No reaction to a man being stabbed in front of them, AND no reaction to him being a ghoul? But the realization hit him quickly after, and he didn't know how to feel. Sole was clearly street smart enough to not be taken advantage of, but that didn't mean people didn't suck. Honestly, if anyone belonged in the town full of misfits who look out for each other, it was Sole. He treats them like they treat him, like a person that is just like everyone else. He is guilty of subtly moving things out of Sole's way or tapping them so they face the right direction, but it's never really noticeable. It's similar to moving something out of someone's way when they are distracted or tapping someone's shoulder to get their attention.  
MacCready: He assumed that he was being hired to be a bodyguard. After all, being blind in the wastes was not easy and they'd need protection. Didn't take long for him to realize Sole is a damn good shot and is able to take care of themself. He deduces that Sole hired him not as a bodyguard, but as a companion, someone who was company on the road so they had someone to talk to. And if he just so happened to notice if someone was trying to rip Sole off he'd be able to call them out. He doesn't treat Sole any differently, before or after his quest. Before, they're his employer and he's not saying shit; after, they're his friend and he's not going to risk that relationship.
Nick: He knew something was up the moment Sole didn't even slightly react to his appearance. Didn't panic seeing him, nor asked what his deal was. The fact they were able to get him out without being able to see, using a terminal and all got his respect. But he was still nervous that a blind person was using a gun, or he was until by some miracle Sole landed at least 6 head-shots in a row. Once being free, the interview to try and find Shaun was difficult. No physical characteristics to go on, the guess that it was Kellogg was honestly just lucky. Regardless, Sole treated Nick like a person from the moment they met, and he'd be lying that it wasn't nice to just be seen as normal. It's only fair he returns the favor.
Piper: Sole was already extremely interesting to Piper, but add on the fact they can't see and she's instantly more interested. What's life in the Commonwealth like if you can't see it? What was life like before the bombs? How do you use your weapon so effectively? She overcompensates for Sole's lack of sight by being reckless, especially when they first travel together. She probably forgets often that Sole is blind, often being like "Hey did you see that Blue?" And then remembering and apologizing. She does get better with time, and definitely publishes papers calling out people/businesses who try to take advantage of Sole. Any papers in the Publick that Sole is in she reads out loud to them.
Preston: He's just relieved that someone came to help them in the Museum. He couldn’t give a shit that Sole is blind, he cares that they're willing to help. The only hiccup is getting the fusion core in the basement since Sturges can't get it and Sole can't get it either. They offer to try and are lucky enough to get it first try and grab the fusion core. After meeting back at Sanctuary he opts to not mention it as it's none of his business. By the time he feels comfortable enough around Sole to ask about it, they're friends and he doesn't want them to be uncomfortable so he never brings it up. He's the type to gently point Sole in the right direction if they're looking the wrong way in conversations or to move stuff out of the way if they might trip on it. He also makes sure that Sanctuary and the Castle are as consistent as possible so that Sole has an easier time navigating them.
Strong: Does not appreciate the puny human not being able to see. How can they help him find the milk of human kindness if they can't even see it? He sees Sole as a weakling and refuses to travel with them.
X6: Doesn't care that Sole is Father's parent, Sole is a liability and cannot handle themself as well as a person with sight can. He doesn't like being paired with Sole.
Old Longfellow: Honestly he kind of envies them. They never have to see the horrors of the fog like he has. But he's aware that one day he might lose his sight too. Besides, the thickness of the fog even having sight isn't much of an advantage. He helps Sole the same way he helps all travelers to Acadia, with the added bonus that Sole can defend themselves. 
Gage: Helps Sole at first because he can't tell they can't see; instantly turns on them after they become Overboss, seeing their blindness as a weakness.
iww-gnv · 7 months
As firms increasingly rely on artificial intelligence-driven hiring platforms, many highly qualified candidates are finding themselves on the cutting room floor.

Body-language analysis. Vocal assessments. Gamified tests. CV scanners. These are some of the tools companies use to screen candidates with artificial intelligence recruiting software. Job applicants face these machine prompts – and AI decides whether they are a good match or fall short.

Businesses are increasingly relying on them. A late-2023 IBM survey of more than 8,500 global IT professionals showed 42% of companies were using AI screening "to improve recruiting and human resources". Another 40% of respondents were considering integrating the technology.

Many leaders across the corporate world hoped AI recruiting tech would end biases in the hiring process. Yet in some cases, the opposite is happening. Some experts say these tools are inaccurately screening some of the most qualified job applicants – and concerns are growing the software may be excising the best candidates.

"We haven't seen a whole lot of evidence that there's no bias here… or that the tool picks out the most qualified candidates," says Hilke Schellmann, US-based author of The Algorithm: How AI Can Hijack Your Career and Steal Your Future, and an assistant professor of journalism at New York University. She believes the biggest risk such software poses to jobs is not machines taking workers' positions, as is often feared – but rather preventing them from getting a role at all.
cyberstudious · 1 month
An Introduction to Cybersecurity
I created this post for the Studyblr Masterpost Jam, check out the tag for more cool masterposts from folks in the studyblr community!
What is cybersecurity?
Cybersecurity is all about securing technology and processes - making sure that the software, hardware, and networks that run the world do exactly what they need to do and can't be abused by bad actors.
The CIA triad is a concept used to explain the three goals of cybersecurity. The pieces are:
Confidentiality: ensuring that information is kept secret, so it can only be viewed by the people who are allowed to do so. This involves encrypting data, requiring authentication before viewing data, and more.
Integrity: ensuring that information is trustworthy and cannot be tampered with. For example, this involves making sure that no one changes the contents of the file you're trying to download or intercepts your text messages.
Availability: ensuring that the services you need are there when you need them. Blocking every single person from accessing a piece of valuable information would be secure, but completely unusable, so we have to think about availability. This can also mean blocking DDoS attacks or fixing flaws in software that cause crashes or service issues.
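Here's a tiny Python sketch of the integrity idea in practice: checking a download against a published SHA-256 checksum. (The byte strings are just stand-ins for real file contents.)

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Return the hex SHA-256 digest of the given bytes."""
    return hashlib.sha256(data).hexdigest()

# The publisher computes this digest and posts it alongside the file.
original = b"pip install my-cool-tool"
published_digest = sha256_digest(original)

# After downloading, you recompute the digest and compare.
downloaded = b"pip install my-cool-tool"
tampered = b"pip install my-EVIL-tool"

print(sha256_digest(downloaded) == published_digest)  # True: file is intact
print(sha256_digest(tampered) == published_digest)    # False: file was modified
```

Even a one-character change produces a completely different digest, which is what makes this a useful tamper check.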
What are some specializations within cybersecurity? What do cybersecurity professionals do?
incident response
digital forensics (often combined with incident response in the acronym DFIR)
reverse engineering
cryptography
governance/compliance/risk management
penetration testing/ethical hacking
vulnerability research/bug bounty
threat intelligence
cloud security
industrial/IoT security, often called Operational Technology (OT)
security engineering/writing code for cybersecurity tools (this is what I do!)
and more!
Where do cybersecurity professionals work?
I view the industry in three big chunks: vendors, everyday companies (for lack of a better term), and government. It's more complicated than that, but it helps.
Vendors make and sell security tools or services to other companies. Some examples are Crowdstrike, Cisco, Microsoft, Palo Alto, EY, etc. Vendors can be giant multinational corporations or small startups. Security tools can include software and hardware, while services can include consulting, technical support, or incident response or digital forensics services. Some companies are Managed Security Service Providers (MSSPs), which means that they serve as the security team for many other (often small) businesses.
Everyday companies include everyone from giant companies like Coca-Cola to the mom and pop shop down the street. Every company is a tech company now, and someone has to be in charge of securing things. Some businesses will have their own internal security teams that respond to incidents. Many companies buy tools provided by vendors like the ones above, and someone has to manage them. Small companies with small tech departments might dump all cybersecurity responsibilities on the IT team (or outsource things to a MSSP), or larger ones may have a dedicated security staff.
Government cybersecurity work can involve a lot of things, from securing the local water supply to working for the big three letter agencies. In the U.S. at least, there are also a lot of government contractors, who are their own individual companies but the vast majority of what they do is for the government. MITRE is one example, and the federal research labs and some university-affiliated labs are an extension of this. Government work and military contractor work are where geopolitics and ethics come into play most clearly, so just… be mindful.
What do academics in cybersecurity research?
A wide variety of things! You can get a good idea by browsing the papers from the ACM's Computer and Communications Security Conference. Some of the big research areas that I'm aware of are:
cryptography & post-quantum cryptography
machine learning model security & alignment
formal proofs of a program & programming language security
security & privacy
security of network protocols
vulnerability research & developing new attack vectors
Cybersecurity seems niche at first, but it actually covers a huge range of topics all across technology and policy. It's vital to running the world today, and I'm obviously biased but I think it's a fascinating topic to learn about. I'll be posting a new cybersecurity masterpost each day this week as a part of the #StudyblrMasterpostJam, so keep an eye out for tomorrow's post! In the meantime, check out the tag and see what other folks are posting about :D
freckledboss · 7 months
Pepper isn't one for tampering with things, especially mechanical and technological devices she's not familiar with. But the lab was empty, and she was merely checking in on the progress Tony had made when it happened. A sudden flash of light, blinding and intense. Then her surroundings became rather dark, body going limp, mind slipping into unconsciousness. This perhaps was a classic case of being in the wrong place at the wrong time. Or maybe fate had a hand to play. No matter how it came about, one thing is for certain. She may be in danger if this isn't sorted out.
The redhead stirs to the sound of birds chirping, and the smell of nature all around. Am I outside? Her head feels a little sore, memory is a bit fuzzy. She slowly sits up and blinks into the bright sunlight, a hand lifting to help see better.
Everything looks different... she glances down and realizes she's sitting in a pile of hay? This is definitely not the lab, or Stark Industries.
Pepper stands then, a tad unbalanced until she gains her footing--stilettos and dirt do not mix--and is able to really examine the place. A barn... I'm inside a barn? How... Panic is beginning to creep inside, but she refuses to let it consume her just yet. There has to be a logical explanation.
"Mr. Stark, if this is some sort of new simulation software you're testing out, I would appreciate the heads up next time," assuming, of course, this is an illusion. "Or maybe warn me not to enter the lab if you're planning on experimenting with new equipment? You know how I feel about potentially hazardous situations..." because let's face it, Tony and risk go hand in hand. "Mr. Stark?"
@mr-tony-stark
usafphantom2 · 2 months
B-2 Gets Big Upgrade with New Open Mission Systems Capability
July 18, 2024 | By John A. Tirpak
The B-2 Spirit stealth bomber has been upgraded with a new open mission systems (OMS) software capability and other improvements to keep it relevant and credible until it's succeeded by the B-21 Raider, Northrop Grumman announced. The changes accelerate the rate at which new weapons can be added to the B-2, allow it to accept constant software updates, and adapt it to changing conditions.
“The B-2 program recently achieved a major milestone by providing the bomber with its first fieldable, agile integrated functional capability called Spirit Realm 1 (SR 1),” the company said in a release. It announced the upgrade going operational on July 17, the 35th anniversary of the B-2’s first flight.
SR 1 was developed inside the Spirit Realm software factory codeveloped by the Air Force and Northrop to facilitate software improvements for the B-2. “Open mission systems” means that the aircraft has a non-proprietary software architecture that simplifies software refresh and enhances interoperability with other systems.
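To make the "open mission systems" idea concrete in miniature: the sketch below is purely illustrative (the interface, module, and weapon names are invented and imply nothing about the actual B-2 software), but it shows how a stable, non-proprietary interface lets new capabilities drop in as modules without changing platform code.

```python
from abc import ABC, abstractmethod

class WeaponModule(ABC):
    """A stable, vendor-neutral interface: the 'open' part of the architecture."""

    @abstractmethod
    def name(self) -> str:
        ...

    @abstractmethod
    def release_sequence(self) -> list[str]:
        ...

class MissionComputer:
    """The platform talks to modules only through the interface,
    so integrating a new weapon never requires platform changes."""

    def __init__(self) -> None:
        self._modules: dict[str, WeaponModule] = {}

    def register(self, module: WeaponModule) -> None:
        self._modules[module.name()] = module

    def employ(self, name: str) -> list[str]:
        return self._modules[name].release_sequence()

class HypotheticalGlideBomb(WeaponModule):
    """An invented weapon module, for illustration only."""

    def name(self) -> str:
        return "glide-bomb"

    def release_sequence(self) -> list[str]:
        return ["arm", "transfer-targeting", "release"]

mc = MissionComputer()
mc.register(HypotheticalGlideBomb())  # new capability drops in as a module
print(mc.employ("glide-bomb"))        # ['arm', 'transfer-targeting', 'release']
```

The design choice is the point: because any vendor can implement the published interface, software refresh and weapons integration stop being proprietary bottlenecks.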
“SR 1 provides mission-critical capability upgrades to the communications and weapons systems via an open mission systems architecture, directly enhancing combat capability and allowing the fleet to initiate a new phase of agile software releases,” Northrop said in its release.
The system is intended to deliver problem-free software on the first go—but should issues arise, to catch and correct them much earlier in the process.
The SR 1 was “fully developed inside the B-2 Spirit Realm software factory that was established through a partnership with Air Force Global Strike Command and the B-2 Systems Program Office,” Northrop said.
The Spirit Realm software factory came into being less than two years ago, with four goals: to reduce flight test risk and testing time through high-fidelity ground testing; to capture more data test points through targeted upgrades; to improve the B-2’s functional capabilities through more frequent, automated testing; and to facilitate more capability upgrades to the jet.
The Air Force said B-2 software updates which used to take two years can now be implemented in less than three months.
In addition to B61 or B83 nuclear weapons, the B-2 can carry a large number of precision-guided conventional munitions. However, the Air Force is preparing to introduce a slate of new weapons that will require near-constant target updates and the ability to integrate with USAF’s evolving long-range kill chain. A quicker process for integrating these new weapons with the B-2’s onboard communications, navigation, and sensor systems was needed.
The upgrade also includes improved displays, flight hardware and other enhancements to the B-2’s survivability, Northrop said.
“We are rapidly fielding capabilities with zero software defects through the software factory development ecosystem and further enhancing the B-2 fleet’s mission effectiveness,” said Jerry McBrearty, Northrop’s acting B-2 program manager.
The upgrade makes the B-2 the first legacy nuclear weapons platform “to utilize the Department of Defense’s DevSecOps [development, security, and operations] processes and digital toolsets,” it added.
The software factory approach accelerates adding new and future weapons to the stealth bomber, and thus improve deterrence, said Air Force Col. Frank Marino, senior materiel leader for the B-2.
The B-2 was not designed using digital methods—the way its younger stablemate, the B-21 Raider, was—but the SR 1 leverages digital technology "to design, manage, build and test B-2 software more efficiently than ever before," the company said.
The digital tools can also link with those developed for other legacy systems to accomplish “more rapid testing and fielding and help identify and fix potential risks earlier in the software development process.”
Following two crashes in recent years, the stealthy B-2 fleet comprises 19 aircraft, which are the only penetrating aircraft in the Air Force’s bomber fleet until the first B-21s are declared to have achieved initial operational capability at Ellsworth Air Force Base, S.D. A timeline for IOC has not been disclosed.
The B-2 is a stealthy, long-range, penetrating nuclear and conventional strike bomber. It is based on a flying wing design combining low observability (LO) with high aerodynamic efficiency. The aircraft's blended fuselage/wing holds two weapons bays capable of carrying nearly 60,000 lb in various combinations.
Spirit entered combat during Allied Force on March 24, 1999, striking Serbian targets. Production was completed in three blocks, and all aircraft were upgraded to Block 30 standard with AESA radar. Production was limited to 21 aircraft due to cost, and a single B-2 was subsequently lost in a crash at Andersen, Feb. 23, 2008.
Modernization is focused on safeguarding the B-2A’s penetrating strike capability in high-end threat environments and integrating advanced weapons.
The B-2 achieved a major milestone in 2022 with the integration of a Radar Aided Targeting System (RATS), enabling delivery of the modernized B61-12 precision-guided thermonuclear freefall weapon. RATS uses the aircraft’s radar to guide the weapon in GPS-denied conditions, while additional Flex Strike upgrades feed GPS data to weapons prerelease to thwart jamming. A B-2A successfully dropped an inert B61-12 using RATS on June 14, 2022, and successfully employed the longer-range JASSM-ER cruise missile in a test launch last December.
Ongoing upgrades include replacing the primary cockpit displays, the Adaptable Communications Suite (ACS) to provide Link 16-based jam-resistant in-flight retasking, advanced IFF, crash-survivable data recorders, and weapons integration. USAF is also working to enhance the fleet’s maintainability with LO signature improvements to coatings, materials, and radar-absorptive structures such as the radome and engine inlets/exhausts.
Two B-2s were damaged in separate landing accidents at Whiteman on Sept. 14, 2021, and Dec. 10, 2022, the latter prompting an indefinite fleetwide stand-down until May 18, 2023. USAF plans to retire the fleet once the B-21 Raider enters service in sufficient numbers around 2032.
Contractors: Northrop Grumman; Boeing; Vought.
First Flight: July 17, 1989.
Delivered: December 1993-December 1997.
IOC: April 1997, Whiteman AFB, Mo.
Production: 21.
Inventory: 20.
Operator: AFGSC, AFMC, ANG (associate).
Aircraft Location: Edwards AFB, Calif.; Whiteman AFB, Mo.
Active Variant: B-2A. Production aircraft upgraded to Block 30 standards.
Dimensions: Span 172 ft, length 69 ft, height 17 ft.
Weight: Max T-O 336,500 lb.
Power Plant: Four GE Aviation F118-GE-100 turbofans, each 17,300 lb thrust.
Performance: Speed high subsonic, range 6,900 miles (further with air refueling).
Ceiling: 50,000 ft.
Armament: Nuclear: 16 B61-7, B61-12, B83, or eight B61-11 bombs (on rotary launchers). Conventional: 80 Mk 62 (500-lb) sea mines, 80 Mk 82 (500-lb) bombs, 80 GBU-38 JDAMs, or 34 CBU-87/89 munitions (on rack assemblies); or 16 GBU-31 JDAMs, 16 Mk 84 (2,000-lb) bombs, 16 AGM-154 JSOWs, 16 AGM-158 JASSMs, or eight GBU-28 LGBs.
Accommodation: Two pilots on ACES II zero/zero ejection seats.
Hypothetical AI election disinformation risks vs real AI harms
I'm on tour with my new novel The Bezzle! Catch me TONIGHT (Feb 27) in Portland at Powell's. Then, onto Phoenix (Changing Hands, Feb 29), Tucson (Mar 9-12), and more!
You can barely turn around these days without encountering a think-piece warning of the impending risk of AI disinformation in the coming elections. But a recent episode of This Machine Kills podcast reminds us that these are hypothetical risks, and there is no shortage of real AI harms:
https://soundcloud.com/thismachinekillspod/311-selling-pickaxes-for-the-ai-gold-rush
The algorithmic decision-making systems that increasingly run the back-ends to our lives are really, truly very bad at doing their jobs, and worse, these systems constitute a form of "empiricism-washing": if the computer says it's true, it must be true. There's no such thing as racist math, you SJW snowflake!
https://slate.com/news-and-politics/2019/02/aoc-algorithms-racist-bias.html
Nearly 1,000 British postmasters were wrongly convicted of fraud by Horizon, the faulty AI fraud-hunting system that Fujitsu provided to the Royal Mail. They had their lives ruined by this faulty AI, many went to prison, and at least four of the AI's victims killed themselves:
https://en.wikipedia.org/wiki/British_Post_Office_scandal
Tenants across America have seen their rents skyrocket thanks to Realpage's landlord price-fixing algorithm, which deployed the time-honored defense: "It's not a crime if we commit it with an app":
https://www.propublica.org/article/doj-backs-tenants-price-fixing-case-big-landlords-real-estate-tech
Housing, you'll recall, is pretty foundational in the human hierarchy of needs. Losing your home – or being forced to choose between paying rent or buying groceries or gas for your car or clothes for your kid – is a non-hypothetical, widespread, urgent problem that can be traced straight to AI.
Then there's predictive policing: cities across America and the world have bought systems that purport to tell the cops where to look for crime. Of course, these systems are trained on policing data from forces that are seeking to correct racial bias in their practices by using an algorithm to create "fairness." You feed this algorithm a data-set of where the police had detected crime in previous years, and it predicts where you'll find crime in the years to come.
But you only find crime where you look for it. If the cops only ever stop-and-frisk Black and brown kids, or pull over Black and brown drivers, then every knife, baggie or gun they find in someone's trunk or pockets will be found in a Black or brown person's trunk or pocket. A predictive policing algorithm will naively ingest this data and confidently assert that future crimes can be foiled by looking for more Black and brown people and searching them and pulling them over.
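The feedback loop described above is easy to demonstrate in a toy simulation. The numbers here are invented purely to illustrate the mechanism: two neighborhoods with identical true crime rates, with patrols allocated in proportion to past recorded crime.

```python
# Toy model: two neighborhoods, A and B, with the SAME true crime rate.
# Recorded crime depends on how much we patrol, and next year's patrols
# are allocated in proportion to recorded crime: the feedback loop.

true_rate = 0.10                      # identical underlying rate in both places
patrol_share = {"A": 0.7, "B": 0.3}   # historical bias: A is over-policed
population = 10_000

for year in range(1, 6):
    recorded = {
        hood: true_rate * population * share  # you only find crime where you look
        for hood, share in patrol_share.items()
    }
    total = sum(recorded.values())
    # "Data-driven" reallocation: patrol where crime was recorded.
    patrol_share = {hood: r / total for hood, r in recorded.items()}
    print(year, {h: round(s, 2) for h, s in patrol_share.items()})
    # the 0.7/0.3 split reproduces itself every year; the bias never corrects
```

Despite identical true crime rates, the algorithm confidently reports the over-policed neighborhood as the high-crime one, year after year.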
Obviously, this is bad for Black and brown people in low-income neighborhoods, whose baseline risk of an encounter with a cop turning violent or even lethal is already high. But it's also bad for affluent people in affluent neighborhoods – because they are underpoliced as a result of these algorithmic biases. For example, domestic abuse that occurs in full detached single-family homes is systematically underrepresented in crime data, because the majority of domestic abuse calls originate with neighbors who can hear the abuse take place through a shared wall.
But the majority of algorithmic harms are inflicted on poor, racialized and/or working class people. Even if you escape a predictive policing algorithm, a facial recognition algorithm may wrongly accuse you of a crime, and even if you were far away from the site of the crime, the cops will still arrest you, because computers don't lie:
https://www.cbsnews.com/sacramento/news/texas-macys-sunglass-hut-facial-recognition-software-wrongful-arrest-sacramento-alibi/
Trying to get a low-waged service job? Be prepared for endless, nonsensical AI "personality tests" that make Scientology look like NASA:
https://futurism.com/mandatory-ai-hiring-tests
Service workers' schedules are at the mercy of shift-allocation algorithms that assign them hours that ensure that they fall just short of qualifying for health and other benefits. These algorithms push workers into "clopening" – where you close the store after midnight and then open it again the next morning before 5AM. And if you try to unionize, another algorithm – that spies on you and your fellow workers' social media activity – targets you for reprisals and your store for closure.
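A shift-allocation rule like the one described is trivially simple to write. This sketch uses invented numbers (a 30-hour benefits threshold, made-up worker names) just to show the mechanism:

```python
# Toy shift allocator in the spirit of the paragraph above: invented numbers.
# Suppose benefits kick in at 30 hours/week; the optimizer fills demand
# while capping every worker just below that line.
BENEFITS_THRESHOLD = 30

def allocate(demand_hours: int, workers: list[str]) -> dict[str, int]:
    """Greedy allocation that never lets anyone qualify for benefits."""
    cap = BENEFITS_THRESHOLD - 1
    schedule = {w: 0 for w in workers}
    for w in workers:
        take = min(cap, demand_hours)
        schedule[w] = take
        demand_hours -= take
        if demand_hours <= 0:
            break
    return schedule

print(allocate(80, ["Ana", "Bo", "Cam"]))
# → {'Ana': 29, 'Bo': 29, 'Cam': 22} -- everyone just under the threshold
```

Five lines of "optimization" and nobody on the schedule ever qualifies, which is the whole point.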
If you're driving an Amazon delivery van, an algorithm watches your eyeballs and tells your boss that you're a bad driver if it doesn't like what it sees. If you're working in an Amazon warehouse, an algorithm decides if you've taken too many pee-breaks and automatically dings you:
https://pluralistic.net/2022/04/17/revenge-of-the-chickenized-reverse-centaurs/
If this disgusts you and you're hoping to use your ballot to elect lawmakers who will take up your cause, an algorithm stands in your way again. "AI" tools for purging voter rolls are especially harmful to racialized people – for example, they assume that two "Juan Gomez"es with a shared birthday in two different states must be the same person and remove one or both from the voter rolls:
https://www.cbsnews.com/news/eligible-voters-swept-up-conservative-activists-purge-voter-rolls/
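The "two Juan Gomezes" failure mode is exactly what you get from matching on a coarse key. The records and states below are invented for illustration, but the logic is the naive dedup rule described:

```python
# Naive "deduplication" by (name, birth date): the kind of matching rule
# described above. Records are invented for illustration.
voters = [
    {"name": "Juan Gomez", "dob": "1970-03-15", "state": "TX"},
    {"name": "Juan Gomez", "dob": "1970-03-15", "state": "GA"},  # a different person!
    {"name": "Ana Silva",  "dob": "1988-11-02", "state": "TX"},
]

seen = {}
purged = []
for v in voters:
    key = (v["name"], v["dob"])   # coarse key: common names WILL collide
    if key in seen:
        purged.append(v)          # flagged as a "duplicate" and removed
    else:
        seen[key] = v

print([f'{v["name"]} ({v["state"]})' for v in purged])
# → ['Juan Gomez (GA)'] -- a legitimate voter purged by a coarse match key
```

With millions of registrations and a handful of very common names, collisions like this are a statistical certainty, not an edge case.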
Hoping to get a solid education, the sort that will keep you out of AI-supervised, precarious, low-waged work? Sorry, kiddo: the ed-tech system is riddled with algorithms. There's the grifty "remote invigilation" industry that watches you take tests via webcam and accuses you of cheating if your facial expressions fail its high-tech phrenology standards:
https://pluralistic.net/2022/02/16/unauthorized-paper/#cheating-anticheat
All of these are non-hypothetical, real risks from AI. The AI industry has proven itself incredibly adept at deflecting interest from real harms to hypothetical ones, like the "risk" that the spicy autocomplete will become conscious and take over the world in order to convert us all to paperclips:
https://pluralistic.net/2023/11/27/10-types-of-people/#taking-up-a-lot-of-space
Whenever you hear AI bosses talking about how seriously they're taking a hypothetical risk, that's the moment when you should check in on whether they're doing anything about all these longstanding, real risks. And even as AI bosses promise to fight hypothetical election disinformation, they continue to downplay or ignore the non-hypothetical, here-and-now harms of AI.
There's something unseemly – and even perverse – about worrying so much about AI and election disinformation. It plays into the narrative that kicked off in earnest in 2016, that the reason the electorate votes for manifestly unqualified candidates who run on a platform of bald-faced lies is that they are gullible and easily led astray.
But there's another explanation: the reason people accept conspiratorial accounts of how our institutions are run is because the institutions that are supposed to be defending us are corrupt and captured by actual conspiracies:
https://memex.craphound.com/2019/09/21/republic-of-lies-the-rise-of-conspiratorial-thinking-and-the-actual-conspiracies-that-fuel-it/
The party line on conspiratorial accounts is that these institutions are good, actually. Think of the rebuttal offered to anti-vaxxers who claimed that pharma giants were run by murderous sociopath billionaires who were in league with their regulators to kill us for a buck: "no, I think you'll find pharma companies are great and superbly regulated":
https://pluralistic.net/2023/09/05/not-that-naomi/#if-the-naomi-be-klein-youre-doing-just-fine
Institutions are profoundly important to a high-tech society. No one is capable of assessing all the life-or-death choices we make every day, from whether to trust the firmware in your car's anti-lock brakes, the alloys used in the structural members of your home, or the food-safety standards for the meal you're about to eat. We must rely on well-regulated experts to make these calls for us, and when the institutions fail us, we are thrown into a state of epistemological chaos. We must make decisions about whether to trust these technological systems, but we can't make informed choices because the one thing we're sure of is that our institutions aren't trustworthy.
Ironically, the long list of AI harms that we live with every day are the most important contributor to disinformation campaigns. It's these harms that provide the evidence for belief in conspiratorial accounts of the world, because each one is proof that the system can't be trusted. The election disinformation discourse focuses on the lies told – and not why those lies are credible.
That's because the subtext of election disinformation concerns is usually that the electorate is credulous, fools waiting to be suckered in. By refusing to contemplate the institutional failures that sit upstream of conspiracism, we can smugly locate the blame with the peddlers of lies and assume the mantle of paternalistic protectors of the easily gulled electorate.
But the group of people who are demonstrably being tricked by AI is the people who buy the horrifically flawed AI-based algorithmic systems and put them into use despite their manifest failures.
As I've written many times, "we're nowhere near a place where bots can steal your job, but we're certainly at the point where your boss can be suckered into firing you and replacing you with a bot that fails at doing your job":
https://pluralistic.net/2024/01/15/passive-income-brainworms/#four-hour-work-week
The most visible victims of AI disinformation are the people who are putting AI in charge of the life-chances of millions of the rest of us. Tackle that AI disinformation and its harms, and we'll make conspiratorial claims about our institutions being corrupt far less credible.
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/02/27/ai-conspiracies/#epistemological-collapse
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en