#obviously do not mess with Amazon drones
quadruple-a · 1 year
Text
[image]
This, this is the university they are giving Amazon drones to. The one whose students put ride-share bikes in trees and on top of buildings, to the point that both the Veo company and the university had to send out messages. The school famous for its mechanical and computer engineering programs. The school with a long military history. Out of all the schools, you picked that one.
I give it a week, tops.
7 notes
metastable1 · 4 years
Link
October 10, 2016.
Four years ago, on a daylong hike with friends north of San Francisco, Altman relinquished the notion that human beings are singular. As the group discussed advances in artificial intelligence, Altman recognized, he told me, that “there’s absolutely no reason to believe that in about thirteen years we won’t have hardware capable of replicating my brain. Yes, certain things still feel particularly human—creativity, flashes of inspiration from nowhere, the ability to feel happy and sad at the same time—but computers will have their own desires and goal systems. When I realized that intelligence can be simulated, I let the idea of our uniqueness go, and it wasn’t as traumatic as I thought.” He stared off. “There are certain advantages to being a machine. We humans are limited by our input-output rate—we learn only two bits a second, so a ton is lost. To a machine, we must seem like slowed-down whale songs.”
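Taking the quoted "two bits a second" figure at face value (it's Altman's number, not an established measurement), the lifetime total really is strikingly small, which is the point of the whale-song line. A quick back-of-envelope check:

```python
# Back-of-envelope check on Altman's "two bits a second" learning-rate claim.
# The 2 bits/s figure is his, not an established measurement.
bits_per_second = 2
seconds_per_year = 60 * 60 * 24 * 365
lifetime_years = 80

lifetime_bits = bits_per_second * seconds_per_year * lifetime_years
print(f"{lifetime_bits:.2e} bits")          # ~5.0e9 bits
print(f"{lifetime_bits / 8 / 1e6:.0f} MB")  # ~631 MB over an 80-year life
```

About 630 megabytes across an entire human life, by that accounting, which is less than a single DVD.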
OpenAI, the nonprofit that Altman founded with Elon Musk, is a hedged bet on the end of human predominance—a kind of strategic-defense initiative to protect us from our own creations. OpenAI was born of Musk’s conviction that an A.I. could wipe us out by accident. The problem of managing powerful systems that lack human values is exemplified by “the paperclip maximizer,” a scenario that the Swedish philosopher Nick Bostrom raised in 2003. If you told an omnicompetent A.I. to manufacture as many paper clips as possible, and gave it no other directives, it could mine all of Earth’s resources to make paper clips, including the atoms in our bodies—assuming it didn’t just kill us outright, to make sure that we didn’t stop it from making more paper clips. OpenAI was particularly concerned that Google’s DeepMind Technologies division was seeking a supreme A.I. that could monitor the world for competitors. Musk told me, “If the A.I. that they develop goes awry, we risk having an immortal and superpowerful dictator forever.” He went on, “Murdering all competing A.I. researchers as its first move strikes me as a bit of a character flaw.”
It was clear what OpenAI feared, but less clear what it embraced. In May, Dario Amodei, a leading A.I. researcher then at Google Brain, came to visit the office, and told Altman and Greg Brockman, the C.T.O., that no one understood their mission. They’d raised a billion dollars and hired an impressive team of thirty researchers—but what for? “There are twenty to thirty people in the field, including Nick Bostrom and the Wikipedia article,” Amodei said, “who are saying that the goal of OpenAI is to build a friendly A.I. and then release its source code into the world.”
“We don’t plan to release all of our source code,” Altman said. “But let’s please not try to correct that. That usually only makes it worse.”
“But what is the goal?” Amodei asked.
Brockman said, “Our goal right now . . . is to do the best thing there is to do. It’s a little vague.”
That's disturbing, but it seems that OpenAI has gotten its act together since then.
A.I. technology hardly seems almighty yet. After Microsoft launched a chatbot, called Tay, bullying Twitter users quickly taught it to tweet such remarks as “gas the kikes race war now”; the recently released “Daddy’s Car,” the first pop song created by software, sounds like the Beatles, if the Beatles were cyborgs. But, Musk told me, “just because you don’t see killer robots marching down the street doesn’t mean we shouldn’t be concerned.” Apple’s Siri, Amazon’s Alexa, and Microsoft’s Cortana serve millions as aides-de-camp, and simultaneous-translation and self-driving technologies are now taken for granted. Y Combinator has even begun using an A.I. bot, Hal9000, to help it sift admission applications: the bot’s neural net trains itself by assessing previous applications and those companies’ outcomes. “What’s it looking for?” I asked Altman. “I have no idea,” he replied. “That’s the unsettling thing about neural networks—you have no idea what they’re doing, and they can’t tell you.”
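The article says nothing about Hal9000's internals beyond "a neural net trained on previous applications and those companies' outcomes." As a rough illustration of that kind of setup (not YC's actual system; the features, data, and architecture below are all invented), a screening model could look like:

```python
# Illustrative sketch of an application-screening neural net in the style of
# Hal9000 as described above. The features and data are invented; YC's real
# system, training set, and architecture are not public.
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical features per past application:
# [founder_count, years_experience, has_prototype, monthly_growth_pct]
past_applications = np.array([
    [2, 5, 1, 20],
    [1, 1, 0,  0],
    [3, 8, 1, 35],
    [1, 2, 1,  5],
    [2, 3, 0,  2],
    [4, 6, 1, 15],
])
# Outcome of each company: 1 = succeeded, 0 = failed.
outcomes = np.array([1, 0, 1, 0, 0, 1])

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(past_applications, outcomes)

# Score a new application. The model emits a probability, but the learned
# weights don't explain *why* it favors one application over another.
new_application = np.array([[2, 4, 1, 25]])
print(model.predict_proba(new_application))  # e.g. [[0.3, 0.7]]
print(model.coefs_[0].shape)  # raw weights: inspectable, not interpretable
```

The last line is the uncomfortable part Altman alludes to: every weight is there to inspect, yet none of them tells you what the network is "looking for."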
OpenAI's immediate goals, announced in June, include a household robot able to set and clear a table. One longer-term goal is to build a general A.I. system that can pass the Turing test—can convince people, by the way it reasons and reacts, that it is human. Yet Altman believes that a true general A.I. should do more than deceive; it should create, discovering a property of quantum physics or devising a new art form simply to gratify its own itch to know and to make.

While many A.I. researchers were correcting errors by telling their systems, "That's a dog, not a cat," OpenAI was focussed on having its system teach itself how things work. "Like a baby does?" I asked Altman. "The thing people forget about human babies is that they take years to learn anything interesting," he said. "If A.I. researchers were developing an algorithm and stumbled across the one for a human baby, they'd get bored watching it, decide it wasn't working, and shut it down."

Altman felt that OpenAI's mission was to babysit its wunderkind until it was ready to be adopted by the world. He'd been reading James Madison's notes on the Constitutional Convention for guidance in managing the transition. "We're planning a way to allow wide swaths of the world to elect representatives to a new governance board," he said. "Because if I weren't in on this I'd be, like, Why do these fuckers get to decide what happens to me?"

Under Altman, Y Combinator was becoming a kind of shadow United Nations, and increasingly he was making Secretary-General-level decisions. Perhaps it made sense to entrust humanity to someone who doesn't seem all that interested in humans. "Sam's program for the world is anchored by ideas, not people," Peter Thiel said. "And that's what makes it powerful—because it doesn't immediately get derailed by questions of popularity." Of course, that very combination of powerful intent and powerful unconcern is what inspired OpenAI: how can an unfathomable intelligence protect us if it doesn't care what we think?

This spring, Altman met Ashton Carter, the Secretary of Defense, in a private room at a San Francisco trade show. Altman wore his only suit jacket, a bunchy gray number his assistant had tricked him into getting measured for on a trip to Hong Kong. Carter, in a pin-striped suit, got right to it. "Look, a lot of people out here think we're big and clunky. And there's the Snowden overhang thing, too," he said, referring to the government's treatment of Edward Snowden. "But we want to work with you in the Valley, tap the expertise."

"Obviously, that would be great," Altman said. "You're probably the biggest customer in the world." The Defense Department's proposed research-and-development spending next year is more than double that of Apple, Google, and Intel combined. "But a lot of startups are frustrated that it takes a year to get a response from you." Carter aimed his forefinger at his temple like a gun and pulled the trigger.

Altman continued, "If you could set up a single point of contact, and make decisions on initiating pilot programs with YC companies within two weeks, that would help a lot."

"Great," Carter said, glancing at one of his seven aides, who scribbled a note. "What else?"

Altman thought for a while. "If you or one of your deputies could come speak to YC, that would go a long way."

"I'll do it myself," Carter promised.
As everyone filed out, Chris Lynch, a former Microsoft executive who heads Carter's digital division, told Altman, "It would have been good to talk about OpenAI." Altman nodded noncommittally. The 2017 U.S. military budget allocates three billion dollars for human-machine collaborations known as Centaur Warfighting, and a long-range missile that will make autonomous targeting decisions is in the pipeline for the following year. Lynch later told me that an OpenAI system would be a natural fit.

Altman was of two minds about handing OpenAI products to Lynch and Carter. "I unabashedly love this country, which is the greatest country in the world," he said. At Stanford, he worked on a DARPA project involving drone helicopters. "But some things we will never do with the Department of Defense." He added, "A friend of mine says, 'The thing that saves us from the Department of Defense is that, though they have a ton of money, they're not very competent.' But I feel conflicted, because they have the world's best cyber command." Altman, by instinct a cleaner-up of messes, wanted to help strengthen our military—and then to defend the world from its newfound strength.

[...]

On a trip to New York, Altman dropped by my apartment one Saturday to discuss how tech was transforming our view of who we are. Curled up on the sofa, knees to his chin, he said, "I remember thinking, when Deep Blue beat Garry Kasparov, in 1997, Why does anyone care about chess anymore? And now I'm very sad about us losing to DeepMind's AlphaGo," which recently beat a world-champion Go player. "I'm on Team Human. I don't have a good logical reason why I'm sad, except that the class of things that humans are better at continues to narrow." After a moment, he added, " 'Melancholy' is a better word than 'sad.' "

Many people in Silicon Valley have become obsessed with the simulation hypothesis, the argument that what we experience as reality is in fact fabricated in a computer; two tech billionaires have gone so far as to secretly engage scientists to work on breaking us out of the simulation. To Altman, the danger stems not from our possible creators but from our own creations. "These phones already control us," he told me, frowning at his iPhone SE. "The merge has begun—and a merge is our best scenario. Any version without a merge will have conflict: we enslave the A.I. or it enslaves us. The full-on-crazy version of the merge is we get our brains uploaded into the cloud. I'd love that," he said. "We need to level up humans, because our descendants will either conquer the galaxy or extinguish consciousness in the universe forever. What a time to be alive!"

Some futurists—da Vinci, Verne, von Braun—imagine technologies that are decades or centuries off. Altman assesses current initiatives and threats, then focusses on pragmatic actions to advance or impede them. Nothing came of Paul Graham's plan for tech to stop Donald Trump, but Altman, after brooding about Trump for months, recently announced a nonpartisan project, called VotePlz, aimed at getting out the youth vote. Looking at the election as a tech problem—what's the least code with the most payoff?—Altman and his three co-founders concentrated on helping young people in nine swing states to register, by providing them with registration forms and stamps. By Election Day, VotePlz's app may even be configured to call an Uber to take you to the polls.

Synthetic viruses? Altman is planning a synthetic-biology unit within YC Research that could thwart them. Aging and death? He hopes to fund a parabiosis company, to place the rejuvenative elixir of youthful blood into an injection. "If it works," he says, "you will still die, but you could get to a hundred and twenty being pretty healthy, then fail quickly." Human obsolescence? He is thinking about establishing a group to prepare for our eventual successor, whether it be an A.I. or an enhanced version of Homo sapiens. The idea would be to assemble thinkers in robotics, cybernetics, quantum computing, A.I., synthetic biology, genomics, and space travel, as well as philosophers, to discuss the technology and the ethics of human replacement. For now, leaders in those fields are meeting semi-regularly at Altman's house; the group jokingly calls itself the Covenant.

As Altman gazes ahead, emotion occasionally clouds his otherwise spotless windscreen. He told me, "If you believe that all human lives are equally valuable, and you also believe that 99.5 per cent of lives will take place in the future, we should spend all our time thinking about the future." His voice dropped. "But I do care much more about my family and friends." He asked me how many strangers I would allow to die—or would kill with my own hands, which seemed to him more intellectually honest—in order to spare my loved ones. As I considered this, he said that he'd sacrifice a hundred thousand. I told him that my own tally would be even larger. "It's a bug," he declared, unconsoled.

He was happier viewing the consequences of innovation as a systems question. The immediate challenge is that computers could put most of us out of work. Altman's fix is YC Research's Basic Income project, a five-year study, scheduled to begin in 2017, of an old idea that's suddenly in vogue: giving everyone enough money to live on. Expanding on earlier trials in places such as Manitoba and Uganda, YC will give as many as a thousand people in Oakland an annual sum, probably between twelve thousand and twenty-four thousand dollars. The problems with the idea seem as basic as the promise: Why should people who don't need a stipend get one, too? Won't free money encourage indolence? And the math is staggering: if you gave each American twenty-four thousand dollars, the annual tab would run to nearly eight trillion dollars—more than double the federal tax revenue.

However, Altman told me, "The thing most people get wrong is that if labor costs go to zero"—because smart robots have eaten all the jobs—"the cost of a great life comes way down. If we get fusion to work and electricity is free, then transportation is substantially cheaper, and the cost of electricity flows through to water and food. People pay a lot for a great education now, but you can become expert level on most things by looking at your phone. So, if an American family of four now requires seventy thousand dollars to be happy, which is the number you most often hear, then in ten to twenty years it could be an order of magnitude cheaper, with an error factor of 2x. Excluding the cost of housing, thirty-five hundred to fourteen thousand dollars could be all a family needs to enjoy a really good life."

In the best case, tech will be so transformative that Altman won't have to choose between the few and the many. When A.I. reshapes the economy, he told me, "we're going to have unlimited wealth and a huge amount of job displacement, so basic income really makes sense. Plus, the stipend will free up that one person in a million who can create the next Apple."
About the last three paragraphs: it's worth pointing out that human-level AI (and beyond) isn't some new kind of Roomba or another gadget with blinking lights, so I am a little surprised by the musings about job displacement and UBI (when AGI is assumed). A post-AGI world will not look like the current world with robotic arms swapped in for Amazon's warehouse workers; by now, Altman seems to get that.
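One more note: the article's arithmetic does hold up. A quick verification, using approximate 2016 figures for U.S. population and federal revenue (the figures are mine, not the article's):

```python
# Checking the article's UBI arithmetic. Population and revenue figures
# are approximate 2016 values.
us_population = 323e6          # ~323 million people (2016)
stipend = 24_000               # dollars per person per year
federal_revenue = 3.3e12       # ~$3.3 trillion federal tax revenue (2016)

annual_tab = us_population * stipend
print(f"${annual_tab / 1e12:.1f} trillion/yr")         # ~$7.8 trillion
print(f"{annual_tab / federal_revenue:.1f}x revenue")  # ~2.3x: "more than double"

# Altman's cost-of-living projection: $70k today, an order of magnitude
# cheaper in 10-20 years, with an error factor of 2x either way.
future_cost = 70_000 / 10
print(future_cost / 2, future_cost * 2)  # 3500.0 14000.0
```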
1 note
Text
Inside the Arena Where Drones Battle a Wall of 1,300 Computer Fans
Wind is the worst. It messes up hair, it blows stuff in eyes, and most famously and rudely of all, one time it made a bridge in Washington twist and undulate until it collapsed.
Alright, maybe that was the fault of the engineers, not the wind. But still, strong gusts have the potential to threaten many technologies, including a new one: drones. If you’ve ever taken a quadcopter out on a windy day, you know the struggle. Now consider that in the near future, our cities will be swarming with delivery drones—and if we don’t want them plummeting out of the skies, they'll have to learn to survive the elements.
For that, engineers have Caltech’s fancy new drone arena, where the machines face terrifying atmospheric disturbances. While your classical wind tunnel uses one or maybe a few big fans to blast air for testing aerodynamics, this system employs a 10-foot-by-10-foot wall of nearly 1,300 CPU cooling fans, each of which can vary in its speed. “That allows us to practically simulate any kind of extreme weather, from gusts to turbulence to a vortex or a sort of mini hurricane,” says Caltech aerospace engineer Morteza Gharib.
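Caltech hasn't published the control software, but the basic idea (a grid of individually addressable fans whose speeds trace out a moving pattern) is easy to sketch. Everything below is illustrative: the grid dimensions, the normalized speed interface, and the gust profile are assumptions, not details from the article.

```python
# Illustrative sketch only: commanding a grid of individually controlled fans
# to approximate a gust sweeping across the wall. Caltech's actual control
# software and fan interface are not public; the 0-1 "speed" here is a
# hypothetical normalized duty cycle.
import math

ROWS, COLS = 36, 36  # ~1,300 fans arranged in a square grid

def gust_speeds(t, width=6.0):
    """Per-fan speeds for a vertical gust band sweeping left to right."""
    center = (t * 8.0) % COLS  # gust column drifts across the wall over time
    grid = []
    for r in range(ROWS):
        row = []
        for c in range(COLS):
            # Gaussian bump around the gust's current column, layered on a
            # steady base flow.
            bump = math.exp(-((c - center) ** 2) / (2 * width ** 2))
            row.append(min(1.0, 0.2 + 0.8 * bump))
        grid.append(row)
    return grid

# One frame of the sweep; a real controller would stream frames continuously.
frame = gust_speeds(t=1.5)
print(f"fan[18][12] speed: {frame[18][12]:.2f}")
```

Swap the Gaussian bump for a rotating velocity field and you get the "sort of mini hurricane" Gharib describes; the point is that per-fan control turns the wall into a programmable wind field rather than a single blast.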
So say you've got a one-fifth-scale model of the drone ambulance you're developing, which of course Caltech has. The idea is to reach people in forest fires or mudslides, then transport them five or six miles away, keeping pilots out of harm's way.
Given the preciousness of the cargo, you’re going to want a smooth ride. As with a full-size rescue helicopter, a traditional quadrotor has to tilt forward to accelerate. Not ideal for the comfort of the patient. This tilting also creates a lot of vibration. Also not ideal. So Caltech researchers are opting for a hybrid design that can lift off and land vertically like a helicopter, yet cruise nice and level like a plane, thanks to fixed wings.
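The tilt requirement falls straight out of point-mass mechanics: a quadrotor's thrust acts along its body axis, so any horizontal acceleration demands a pitch angle. A worked example with made-up numbers:

```python
# Why a quadrotor must pitch to accelerate: thrust acts along the airframe's
# axis, so the tilt angle follows from theta = atan(a_horizontal / g).
# The acceleration value is illustrative, not a Caltech figure.
import math

g = 9.81          # m/s^2
a_forward = 3.0   # desired horizontal acceleration, m/s^2 (made-up value)

theta = math.degrees(math.atan(a_forward / g))
thrust_factor = math.sqrt(1 + (a_forward / g) ** 2)  # thrust vs. hover thrust

print(f"required pitch: {theta:.1f} deg")            # ~17 deg nose-down
print(f"thrust needed: {thrust_factor:.2f}x hover")  # ~1.05x
```

A fixed-wing cruise phase sidesteps this entirely: the wings carry the weight, the fuselage stays level, and the patient isn't pitched seventeen degrees nose-down every time the aircraft speeds up.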
Obviously engineers know how fixed-wing aircraft and helicopters work in a wind tunnel. But a hybrid design, that’s more intricate. “When you put them together as a system, it has not been tried before—it's a very novel design in that respect,” Gharib says. “So this wind tunnel allows us to expose this machine to extreme weather.”
Oddly enough, you can find some of the most extreme weather conditions in cities. If you've ever walked among skyscrapers and thought it was needlessly gusty, you're not crazy. It's thanks to a phenomenon called the Venturi effect, in which winds that are constricted to, say, the space between buildings, end up accelerating. (Try this at home with a fan and a cardboard box.)
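The speed-up follows from mass conservation: air at these speeds is roughly incompressible, so the same volume per second forced through a narrower gap must move faster (A1·v1 = A2·v2). A toy street-canyon calculation, with invented dimensions:

```python
# Venturi effect via the continuity equation: for roughly incompressible
# flow, A1 * v1 = A2 * v2, so a narrower passage means faster wind.
# Street dimensions below are invented for illustration.
open_width, open_height = 100.0, 30.0  # m: unobstructed approach "window"
canyon_width = 25.0                    # m: gap between two buildings
v_open = 5.0                           # m/s: ambient wind speed

a1 = open_width * open_height
a2 = canyon_width * open_height        # same height, narrower gap
v_canyon = v_open * a1 / a2

print(f"wind between buildings: {v_canyon:.0f} m/s ({v_canyon / v_open:.0f}x)")
# -> 20 m/s, a 4x speed-up. Real flows spill over and around the buildings,
#    so the actual increase is smaller, but the effect points the same way.
```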
Those atmospheric dynamics are going to run smack into future fleets of delivery drones. It’ll be tough enough for engineers to figure out a system that keeps drones from a bunch of different companies like Amazon and Google from colliding. (Don’t worry, NASA’s working on it.) But gusts of wind will complicate that communication—what happens when an unanticipated swoosh blows a drone into a building, or another drone? That’s why engineers have to use wind tunnels like Caltech’s to better understand how different wind conditions affect these vehicles individually, and as a group.
And if you think that sounds tough, wait until you try flying on Mars. In addition to its terrestrial experiments, Caltech is working with NASA to test a Mars helicopter, which could accompany future rovers as a scout. The idea is to test in the drone arena, then do further testing in a vacuum chamber to simulate the thinner atmosphere of Mars. "Of course we cannot change gravity," says Gharib, "but we can test whether this machine can fly in much thinner air."
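To see why the thin atmosphere is the hard part: rotor lift scales roughly with air density times the square of tip speed, and the Martian surface density is around 1.6 percent of Earth's. A rough scaling check, using approximate densities (these numbers are mine, not Caltech's or NASA's):

```python
# Rough scaling for rotor lift in Martian air: lift ~ rho * A * v_tip^2
# (constant lift coefficient and rotor area assumed). Densities approximate.
rho_earth = 1.225  # kg/m^3 at sea level
rho_mars = 0.020   # kg/m^3 at the Martian surface, ~1.6% of Earth's

# To generate the same lift with the same rotor area, tip speed must rise by:
speedup = (rho_earth / rho_mars) ** 0.5
print(f"tip speed: {speedup:.1f}x faster")  # ~7.8x

# Mars's weaker gravity (0.38 g) softens the requirement: only 38% of the
# Earth-hover lift is needed to support the same mass.
speedup_net = (0.38 * rho_earth / rho_mars) ** 0.5
print(f"net: {speedup_net:.1f}x faster")    # ~4.8x
```

That gap is why the vacuum-chamber step matters: the arena can test control and gusts, but only reduced pressure can show whether the rotors actually bite at Mars-like densities.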
The future of flight both at home and far, far abroad is taking shape in the Caltech drone arena. Take that, wind.
Read more: http://www.wired.com/
0 notes