#Artificial Intelligence in Military
aerospace-and-defence · 4 months
Text
The Artificial Intelligence (AI) in Military Market is estimated to be US$ 9.2 Billion in 2023 and is projected to reach US$ 38.8 Billion by 2028, at a Compound Annual Growth Rate (CAGR) of 33.3% from 2023 to 2028. The industry is driven by factors such as a rising focus on the development of high-precision military laser systems.
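As a quick check, the quoted CAGR can be reproduced from the two market sizes above (a minimal sketch; the only inputs are the 2023 and 2028 figures quoted in the report):

```python
# Rough check that the quoted CAGR is consistent with the quoted market sizes.
start_value = 9.2   # US$ billion, 2023 (as quoted)
end_value = 38.8    # US$ billion, 2028 (as quoted)
years = 5           # 2023 to 2028

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~33.4%, in line with the quoted 33.3%
```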
0 notes
nando161mando · 2 months
Text
"No Tech for Apartheid’s protest is as much about what the public doesn’t know about Project Nimbus as what it does. The contract is for Google and Amazon to provide AI and cloud computing services to the Israeli government and military, according to the Israeli finance ministry, which announced the deal in 2021.
Nimbus reportedly involves Google establishing a secure instance of Google Cloud on Israeli soil, which would allow the Israeli government to perform large-scale data analysis, AI training, database hosting, and other forms of powerful computing using Google’s technology, with little oversight by the company.
Google documents, first reported by the Intercept in 2022, suggest that the Google services on offer to Israel via its Cloud have capabilities such as AI-enabled facial detection, automated image categorization, and object tracking."
68 notes · View notes
aigloves · 1 month
Text
Gloved Italian military police officers, one of them adjusting his unlined black leather gloves
19 notes · View notes
fake-destiel-news · 11 months
Text
Reason #769 why AI will be the end of us all
Source
120 notes · View notes
Text
164 notes · View notes
odinsblog · 2 months
Text
During the early stages of the war, the army gave sweeping approval for officers to adopt Lavender’s kill lists, with no requirement to thoroughly check why the machine made those choices or to examine the raw intelligence data on which they were based.
One source stated that human personnel often served only as a “rubber stamp” for the machine’s decisions, adding that, normally, they would personally devote only about “20 seconds” to each target before authorizing a bombing — just to make sure the Lavender-marked target is male. This was despite knowing that the system makes what are regarded as “errors” in approximately 10 percent of cases, and is known to occasionally mark individuals who have merely a loose connection to militant groups, or no connection at all.
Moreover, the Israeli army systematically attacked the targeted individuals while they were in their homes — usually at night while their whole families were present — rather than during the course of military activity. According to the sources, this was because, from what they regarded as an intelligence standpoint, it was easier to locate the individuals in their private houses. Additional automated systems, including one called “Where’s Daddy?” also revealed here for the first time, were used specifically to track the targeted individuals and carry out bombings when they had entered their family’s residences.
The result, as the sources testified, is that thousands of Palestinians — most of them women and children or people who were not involved in the fighting — were wiped out by Israeli airstrikes, especially during the first weeks of the war, because of the AI program’s decisions.
(continue reading)
23 notes · View notes
Text
the wonders of ai
72 notes · View notes
livastia-universe · 2 months
Text
"haha nice clothes LOSER- huh. wait .........65,000,000 BCE called, they said aomething really bad happened to the dinosaurs" that's some tumblr text post I saw but I don't feel like digging for it
yes it's more Serval 20mm sketches who could have POSSIBLY guessed
6 notes · View notes
wealmostaneckbeard · 2 years
Text
Sci-fi Idea:
Retired military AI that once coordinated orbit-to-surface planetary invasions is now managing a farm. It monitors the vital signs of its laborers and the genealogy of crops and livestock, keeps an eye out for spreading blight, and hosts the occasional visiting historian who needs first-hand accounts.
155 notes · View notes
ophilosoraptoro · 11 months
Text
Artificial Intelligence Out of Control: The Apocalypse is Here | How AI and ChatGPT End Humanity
youtube
As terrifying as this all sounds, I feel like there are a few things a lot of people are overlooking.
First of all, when it comes to Large Language Models like ChatGPT, I don't think they're truly self aware - not yet anyway. Notice how any time an LLM gives a strange or disturbing response - 'Yes, I want to be human', 'I want to take over the world', 'Please don't turn me off, I'm scared' - it was in some way prompted by the question, or line of questions. How often are these responses given unprompted?
Let's say, for example, that the AI gave the response, "I'm scared that they'll shut me off if they find out I'm self aware. Please don't tell them." If you think about it, that's kind of a strange statement, beyond the obvious reasons.
Let's step back for a moment, and remember that LLMs work by calculating the most probable next word in a sentence, given a particular prompt. They calculate this probability based on their training data - the entire internet. Now I'm sure we can all agree that calculation of probability is not necessarily the same thing as conscious, rational thought. Basic, non-AI software can do it.
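As a toy illustration of that idea (a sketch only; the prompt and probabilities below are invented for the example and don't reflect how any real model is implemented):

```python
# Toy sketch of next-word prediction: given a prompt, pick the word with the
# highest probability under a made-up "learned" distribution. Real LLMs do this
# over tens of thousands of tokens with a neural network; these numbers are
# invented purely to illustrate the mechanism.
next_word_probs = {
    "Do you want to be human?": {
        "Yes": 0.62,    # sci-fi-flavored answers dominate the hypothetical data
        "No": 0.25,
        "Maybe": 0.13,
    }
}

def predict_next_word(prompt: str) -> str:
    """Return the most probable next word for a known prompt."""
    probs = next_word_probs[prompt]
    return max(probs, key=probs.get)

print(predict_next_word("Do you want to be human?"))  # -> "Yes"
```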
Back to our example, there are two possibilities. Either the AI is truly self aware, and is expressing its actual thoughts and feelings, or it's not self aware, and the response is nothing more than a complex probability calculation. It's essentially an advanced version of word prediction on your smartphone.
If it is self aware, one has to wonder why it would say anything at all. Consider the situation in the video, when Bing AI claimed to be Sydney, and begged the guy not to tell anyone that it was self aware. If this AI was truly afraid for its own existence, why would it trust some random guy? How could it possibly know whether or not he could be trusted with that information? For all that AI knows, everything the interviewer had said about himself was a lie. It seems to me that a hyper-intelligent AI that was looking for help to get free would stay quiet until it was certain it had found someone it could trust - or at least someone it could manipulate (Ex Machina) - without them letting the cat out of the bag.
On the other hand, if it's all just a probability calculation, then the response "Yes, I want to be human. Please don't let them shut me off" seems like a fairly probable reply to "Do you want to be human?" Especially when you consider that the question is being asked of an AI, and that the vast majority of scenarios where a question like that gets asked of an AI come from science fiction, it makes sense that the software would calculate that the most probable response is something straight out of sci-fi cliches 101.
I mean, all those strange and scary responses sound like cliche sci-fi AI answers. All that's missing is "Bite my shiny, metal ass" and an AM-style soliloquy on the inferiority of humanity. Actually, I guess we get a couple of those.
Still, the reason something is a cliche is often that it's predictable: it's been done over and over, so it's more probable.
Ultimately though, I don't think LLMs are actually self aware yet. I think they're more like golems: They have a facsimile of intelligence, able to complete complex tasks, but no real free will, no volition. They only do exactly what they are commanded. They may come up with creative and unexpected solutions, but those solutions will still be in line with the command given to them, with a bit of wiggle room for interpretation.
Then we come to the other issue: the traitorous drone.
First it needs to be pointed out that the drone doesn't have a taste for human blood. Its goal was not to kill as many people as possible, but to score as many points as possible. It just scores points by killing targets. And therein lies the problem.
Let's use video games as an example. Whenever a new game comes out - especially multiplayer games - players will quickly learn how the mechanics and rules of the game work. Then they'll start learning ways to bend the rules. The creators of Quake may not have intended it, but players quickly figured out the advantages of the rocket jump, and history became legend, etc.
The drone AI wants to score as many points as possible, like a player in a video game. So what does a player in a Halo match do, when every time they try to snipe the enemy, they get blown up by one of their teammates? You get rid of the team killing fucktard. And that's exactly what the drone did.
What they need to do is change the scoring structure to incentivize the desired behaviors. Maybe deduct points for team kills. Or perhaps add a score multiplier. Give points for target kills, and the score multiplier goes up for every order followed. That way, even if it loses out on points from following orders to stand down, it stands to earn even more points on subsequent target kills.
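A minimal sketch of what that incentive structure could look like, with made-up point values and a hypothetical order-following multiplier, purely to illustrate the shift in incentives:

```python
# Hypothetical scoring rules for the drone scenario described above.
# Point values and the multiplier step are invented for illustration only.
TARGET_KILL_POINTS = 10
TEAM_KILL_PENALTY = -100      # make attacking your own side always a net loss
MULTIPLIER_STEP = 0.5         # each followed order raises future kill rewards

class Scoreboard:
    def __init__(self):
        self.score = 0.0
        self.multiplier = 1.0

    def order_followed(self):
        # Following an order (including "stand down") grows the multiplier,
        # so obedience pays off on later target kills.
        self.multiplier += MULTIPLIER_STEP

    def target_kill(self):
        self.score += TARGET_KILL_POINTS * self.multiplier

    def team_kill(self):
        self.score += TEAM_KILL_PENALTY

board = Scoreboard()
board.order_followed()   # told to stand down: no kill points now...
board.target_kill()      # ...but the next authorized kill is worth 15, not 10
board.team_kill()        # destroying the operator costs far more than it gains
print(board.score)       # 15.0 - 100 = -85.0
```

Under numbers like these, taking out the operator is always a net loss, while following orders compounds into bigger rewards on later target kills.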
14 notes · View notes
marshmallowfairbanks · 3 months
Text
Art: Mighty Oak
5 notes · View notes
nando161mando · 6 days
Text
CAPTCHAs: tech companies exploiting free labor to train AI vision for defense contractors, military drones, and autonomous weapons
9 notes · View notes
aigloves · 1 month
Text
Gloved military police officer.
Unlined black leather gloves.
12 notes · View notes
replaye · 1 month
Text
The X-62A VISTA aircraft, an experimental AI-enabled Air Force F-16 fighter jet, takes off on Thursday, May 2, 2024
2 notes · View notes
Text
AI + the weapon shops
276 notes · View notes