Welcome to Robotics Cloud! Topics covered: robots and AI news, plus technology and cloud recommendations.

Kick-back: What is Artificial Intelligence?
Artificial Intelligence is machine-based intelligence that can remember, think, decide and act without human intervention. AI can be broken down into three forms:
1. Artificial Narrow Intelligence – machine intelligence developed to focus on a single task. Examples include search engines, Siri and robo-advisors.
2. Artificial General Intelligence – machine intelligence able to cover a broad range of functions, similar to human intelligence. Examples of this form have yet to come into existence, with experts predicting this level of AI will be reached within 30 to 60 years.
3. Super AI – sentient machine intelligence that exponentially surpasses human intelligence. Examples are yet to be seen (some predict this form could emerge within days of Artificial General Intelligence being achieved).

THE AI COWORKER
The notion of a flawless machine translator has captured the imagination of programmers and the public alike.
Google’s advances in neural machine translation implied that the grail was within reach—and, along with it, the moment when human workers become obsolete.
We’re not living in the golden age of AI, but we are living in the golden age of AI-enhanced productivity. Every technological leap has its holdouts: people who can’t stand the thought of collaborating with machines, and who would rather bury their heads in their idea journals and pretend that nothing is changing.

AI is going to carry Lenovo’s business
How is this possible, you may ask? Revenue projections for artificial intelligence (AI) companies are rising fast, and computer giant Lenovo Group, a global AI powerhouse, has stepped up its use of the technology to drive operational efficiency, improve customer service and advance the company’s ambitious business development initiatives.
Machine learning is a type of AI focused on computer programs that have the ability to learn when exposed to new data.
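As a minimal, hedged illustration of that definition (nothing Lenovo-specific), the sketch below trains a simple classifier incrementally, so its accuracy improves each time it is exposed to a new batch of data; the synthetic data and the scikit-learn model are assumptions chosen purely for brevity.

```python
# A toy illustration of machine learning: a model that improves as it is
# exposed to new data. Uses scikit-learn's SGDClassifier, which supports
# incremental training via partial_fit. The data here is synthetic.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

def make_batch(n=200):
    """Generate a synthetic batch: label is 1 when the feature sum is positive."""
    X = rng.normal(size=(n, 2))
    y = (X.sum(axis=1) > 0).astype(int)
    return X, y

model = SGDClassifier(random_state=0)
classes = np.array([0, 1])

X_test, y_test = make_batch(1000)
for step in range(5):                      # each loop = "exposure to new data"
    X_new, y_new = make_batch()
    model.partial_fit(X_new, y_new, classes=classes)
    acc = model.score(X_test, y_test)
    print(f"after batch {step + 1}: accuracy = {acc:.2f}")
```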
Cloud computing enables users to buy, sell, lease or distribute over the internet a range of digital resources as an on-demand service, just like electricity from a power grid.
Robot Clings Underwater with a Force 340 Times its Own Weight
A robot inspired by a hitchhiking fish can cling to surfaces underwater with a force 340 times its own weight.
The new bot was inspired by the remora, fish that cling to larger marine animals like sharks and whales, feeding off their hosts' dead skin and feces.
Remora fish do this with a specially adapted fin on their undersides called a suction disc, which consists of a soft, circular "lip" and linear rows of tissue called lamellae. The lamellae sport tiny, needle-like spinules. The remora uses tiny muscles around the disc to change its shape and suction itself to the host; the spinules then provide major gripping power by adding friction to the equation.
"Biologists say that it represents one of the most extraordinary adaptions within the vertebrates," said Li Wen, a robotics and biomechanics researcher at Beihang University in China and the lead author of a new paper describing the remora robot. [7 Clever Technologies Inspired by Nature]
Fishy inspiration
Wen said he got the idea for a remora-inspired robot when he was a postdoctoral researcher at Harvard University. He and his advisors were working on designing 3D-printed sharkskin. When looking up photos to use in a paper, Wen said, he kept seeing these odd little hangers-on in photos of sharks. They were remoras. Struck by the fact that no one had tried to make a biorobotic remora disc, Wen and his colleagues decided to tackle the project themselves.
To do so, they had to come up with a way to create a disc with sections ranging from downright rigid to skin-soft. The researchers used 3D printing to pull off this feat, and then added approximately 1,000 faux spinules made of laser-cut carbon fiber. To allow the disc to move just like a real remora disc, the researchers embedded six pneumatic actuators — basically little air pockets — that could inflate and deflate on cue.
The result looks a bit like one of those shaving razors with far too many blades, just larger. The robot measures about 5 inches (13 centimeters) from end to end.
Ride-along robot
To test this fishy bot, the researchers attached it underwater to a variety of surfaces, some rough, some smooth, some rigid and some flexible. These included real mako shark skin, plexiglass, epoxy resin and silicone elastomer. The robot clung quite nicely to all the surfaces, the researchers found.
The force needed to pull the remora robot off the smooth plexiglass was about 436 newtons, which translates to 340 times the weight of the robot itself. On rougher surfaces, the bot clung a little less tightly. It took about 167 newtons of force to pull the bot off real sharkskin, for instance.
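A quick back-of-the-envelope check of those figures (assuming standard gravity, g ≈ 9.81 m/s², which is not stated in the write-up):

```python
# Sanity-check the reported numbers: 436 N of pull-off force described as
# 340 times the robot's own weight implies a robot mass of roughly 130 g.
g = 9.81                      # m/s^2, standard gravity (assumption)
pull_off_force = 436.0        # newtons, on smooth plexiglass
ratio = 340                   # force expressed as multiples of robot weight

robot_weight = pull_off_force / ratio      # ~1.28 N
robot_mass = robot_weight / g              # ~0.13 kg, i.e. about 130 grams
print(f"implied robot weight: {robot_weight:.2f} N (~{robot_mass*1000:.0f} g)")
```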
Finally, the researchers attached their disc to a real remotely operated underwater vehicle (ROV) and practiced attaching the ROV, via the disc, to various surfaces. They had a 100 percent success rate attaching the disc to the same range of surfaces that they'd tested before, with an average attachment time of less than 4 seconds, the study said.
"The rigid spinules and soft material overlaying the lamellae engage with the surface when rotated, just like discs of live remora," Wen told Live Science.
While adhesive robots are nothing new, the remora is one of the first options roboticists have had for underwater attachments. Other sticky bots, like ones inspired by tree frogs and geckos, don't work well when submerged. The remora bot could be used to attach things to any broad underwater surface, Wen said, or to allow an underwater robot to cruise along on the underside of a boat.
ROBOTS ON PRIVATE SECTOR JOBS
Four million jobs in the British private sector could be replaced by robots in the next decade, according to business leaders asked about the future of automation and artificial intelligence.
The potential impact amounts to 15% of the current workforce in the sector and emerged in a poll conducted by YouGov for the Royal Society of Arts, whose chief executive, Matthew Taylor, has been advising Downing Street on the future of modern work.
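As a rough check of scale (not a figure from the report), those two numbers imply a private-sector workforce of roughly 27 million:

```python
# If 4 million jobs represent 15% of the current private-sector workforce,
# the implied total workforce is about 26.7 million.
jobs_at_risk = 4_000_000
share_of_workforce = 0.15
implied_workforce = jobs_at_risk / share_of_workforce
print(f"implied private-sector workforce: {implied_workforce / 1e6:.1f} million")
```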
Jobs in finance and accounting, transport and distribution and in media, marketing, and advertising are most likely to be automated in the next decade, the research says.
The RSA’s prediction of the impact of robotics on working lives is lower than some other estimates. Four years ago, academics at the University of Oxford predicted 35% of jobs could be rendered obsolete by new technology, while the Bank of England predicted in 2015 that up to 15m jobs in Britain were at risk from robots “hollowing out” the workforce.
The RSA is also more optimistic about the potential of robots and artificial intelligence than US tech billionaire Elon Musk, who has called AI “the scariest problem” and “our biggest existential threat” because, he predicts, machines will be able to do everything better than humans.
Research by the University of Oxford and Deloitte last year predicted more than 850,000 public sector jobs could be lost by 2030 through automation.
Asda operates a fully automated distribution warehouse in west London; white-collar tasks are being automated by PwC, the accountancy firm, and Linklaters, the law firm, which have been developing software robots that use artificial intelligence to learn to do research tasks usually undertaken by junior accountants and lawyers.
The RSA warns that artificial intelligence and robotics will “undoubtedly cause the loss of some jobs, whether it is autonomous vehicles pushing taxi drivers out of business or picking and packing robots usurping warehouse workers”. But it argues that new technologies could phase out mundane jobs, raise productivity levels and so deliver higher wages and “allow workers to concentrate on more human-centric roles that are beyond the reach of machines”.
It found that business leaders largely believed that new technologies were more likely to alter jobs rather than eliminate them and that this, combined with the creation of new types of jobs, would lead to greater prosperity in the long run.
Care homes are also trialing robots. One in Lincoln plans to use one to help residents remember daily necessities such as taking medication. The robot will also monitor their movements and habits as a nurse would. A care company in London, Three Sisters Home Care, will soon try the use of robots for lifting people so only one care worker will be needed rather than two.
Three Sisters’ chief executive, Jobeda Ali, told the researchers: “If I don’t have to send a person to do a transfer job [lifting], I can send them to have a cup of tea and a chat. This is a much better use of their time than carrying patients or cooking meals.”
The prediction that millions of jobs will be lost to robots led the Trades Union Congress to warn against “shredding good jobs”.
“The UK must make the most of the economic opportunities that new technologies offer,” said Frances O’Grady, general secretary of the TUC. “Robots and AI could let us produce more for less, boosting national prosperity. But we need to talk about who benefits – and how workers get a fair share. The productivity gains must be used to improve pay and conditions for workers.”
Benedict Dellot, the author of the report, said the technical limitations on robots, evidenced so far by driverless cars crashing and the difficulty of getting robots to read at an adult level, would restrict the speed with which jobs will be automated.
The RSA has also warned that Britain needs to invest more in robots or risk falling further behind countries including the US, France, Germany, Spain, and Italy where companies are buying more robots than in the UK.
“AI and robotics could solve some of the gaps and problems in the labor market with low-paid, dull, dirty, dangerous jobs that nobody really wants to fill,” Dellot said. “The technology has the potential to fundamentally improve productivity levels in the UK.”
The report also warns that increasing automation could deepen economic inequality and that “demographic biases could become further entrenched”. It argues that to avoid this, policymakers should take control of the development of the technology by creating an ethical framework to guide the behavior of AI and by encouraging investment in “benevolent technology that enriches the worker experience”.
API - application program interface
An application program interface (API) is a set of routines, protocols, and tools for building software applications. An API specifies how software components should interact, and APIs are used when programming graphical user interface (GUI) components. A good API makes it easier to develop a program by providing all the building blocks; a programmer then puts the blocks together.
Types of APIs
There are many different types of APIs for operating systems, applications or websites. Windows, for example, has many API sets that are used by system hardware and applications — when you copy and paste text from one application to another, it is an API that allows that to work.
Most operating environments, such as MS-Windows, provide APIs, allowing programmers to write applications consistent with the operating environment. Today, APIs are also specified by websites. For example, Amazon or eBay APIs allow developers to use the existing retail infrastructure to create specialized web stores. Third-party software developers also use Web APIs to create software solutions for end-users.
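To make that concrete, here is a minimal sketch of calling a web API from code. The endpoint, parameters and response fields are invented for illustration; this is not a real Amazon or eBay API.

```python
# Illustrative only: a tiny client for a hypothetical retail web API.
# The base URL, path, and fields below are invented for this example.
import json
import urllib.parse
import urllib.request

BASE_URL = "https://api.example-store.test/v1"   # hypothetical endpoint

def search_products(query: str, max_results: int = 5) -> list[dict]:
    """Call the (hypothetical) product-search endpoint and return parsed JSON."""
    params = urllib.parse.urlencode({"q": query, "limit": max_results})
    with urllib.request.urlopen(f"{BASE_URL}/products?{params}") as resp:
        return json.loads(resp.read().decode("utf-8"))["items"]

if __name__ == "__main__":
    for item in search_products("usb cable"):
        print(item["name"], item["price"])
```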
AI Startups Want to Fix Tech’s Diversity Problem
Eyal Grayevsky has a plan to make Silicon Valley more diverse. Mya Systems, the San Francisco-based artificial intelligence company that he cofounded in 2012, has built its strategy on a single idea: Reduce the influence of humans in recruiting. “We’re taking out bias from the process,” he tells me.
They’re doing this with Mya, an intelligent chatbot that, much like a recruiter, interviews and evaluates job candidates. Grayevsky argues that unlike some recruiters, Mya is programmed to ask objective, performance-based questions and avoid the subconscious judgments that a human might make. When Mya evaluates a candidate’s resume, it doesn’t look at the candidate’s appearance, gender, or name. “We’re stripping all of those components away,” Grayevsky adds.
Though Grayevsky declined to name the companies that use Mya, he says that it’s currently used by several large recruitment agencies, all of which employ the chatbot for “that initial conversation.” It filters applicants against the job’s core requirements, learns more about their educational and professional backgrounds, informs them about the specifics of the role, measures their level of interest, and answers questions on company policies and culture.
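Mya’s actual implementation is proprietary, but a toy sketch of what “filtering applicants against the job’s core requirements” could look like in code is shown below; every field name and threshold here is invented for illustration.

```python
# A deliberately simple, rule-based screen: check each candidate's answers
# against a job's core requirements. This is an illustration only, not how
# Mya (or any real product) works; every field name here is invented.
from dataclasses import dataclass

@dataclass
class Candidate:
    years_experience: float
    skills: set[str]
    willing_to_relocate: bool

@dataclass
class JobRequirements:
    min_years_experience: float
    required_skills: set[str]
    relocation_required: bool

def passes_core_requirements(c: Candidate, job: JobRequirements) -> bool:
    """True if the candidate meets every hard requirement for the role."""
    return (
        c.years_experience >= job.min_years_experience
        and job.required_skills <= c.skills          # all required skills present
        and (not job.relocation_required or c.willing_to_relocate)
    )

job = JobRequirements(3, {"python", "sql"}, relocation_required=False)
print(passes_core_requirements(Candidate(4, {"python", "sql", "aws"}, False), job))  # True
print(passes_core_requirements(Candidate(1, {"python"}, True), job))                 # False
```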
Everyone knows that the tech industry has a diversity problem, but attempts to rectify these imbalances have been disappointingly slow. Though some firms have blamed the “pipeline problem,” much of the slowness stems from recruiting. Hiring is an extremely complex, high-volume process, where human recruiters—with their all-too-human biases—ferret out the best candidates for a role. In part, this system is responsible for the uniform tech workforce we have today. But what if you could reinvent hiring—and remove people? A number of startups are building tools and platforms that recruit using artificial intelligence, which they claim will take human bias largely out of the recruitment process.
Another program that seeks to automate the bias out of recruiting is HireVue. Using intelligent video- and text-based software, HireVue predicts the best performers for a job by extracting as many as 25,000 data points from video interviews. Used by companies like Intel, Vodafone, Unilever and Nike, HireVue’s assessments are based on everything from facial expressions to vocabulary; they can even measure such abstract qualities as candidate empathy. HireVue's CTO Loren Larsen says that through HireVue, candidates are “getting the same shot regardless of gender, ethnicity, age, employment gaps, or college attended.” That’s because the tool applies the same process to all applicants, who in the past risked being evaluated by someone whose judgement could change based on mood and circumstance.
Drones Will Revolutionize Every Industry They Touch
PrecisionHawk’s CEO Michael Chasen isn’t afraid to make big statements – or to take big actions. In his keynote address to last week’s InterDrone conference, Chasen said that drones were a transformative technology, and PrecisionHawk was ready for the transformation.
Chasen framed his discussion in terms of iPhones and Roombas (the robotic vacuum cleaner).
“What I mean by that is are drones a technology that will infiltrate and innovate in every industry they touch – causing us to fundamentally change the way that industry is doing business – as the iPhone did?” asked Chasen. “Or are drones primarily an appliance, good for one or two things but…always a single-use technology.”
It’s clear what Chasen thinks the answer is, and he sees the signs of it as more verticals adopt the technology. “…the real value for smartphones came when people started envisioning different applications for different industries,” says Chasen. “…And I believe that is what we are starting to see here with drones. We can already see how drones are starting to touch various industries.”
Chasen says that PrecisionHawk is making three moves to ensure that they’re ready for that evolution. They’re working on fully integrating their solution, from the best tools for data capture to the right applications and algorithms for analysis. They’re building out a world-class service organization to help new industries with adoption. And, says Chasen, PrecisionHawk is maintaining its leadership position as a company: “We are making sure we are giving back to the community, and we are making sure that we have an open system that is easy for people to work in and extend.”
“I do think that drones are the next iPhone and at PrecisionHawk we are doing everything we can to move forward with that vision,” Chasen said.
After the keynote, DRONELIFE caught up with Chasen to talk more about how PrecisionHawk is gearing up for the next step in the industry’s evolution. The company continues to participate in top-level discussions on regulations; providing experience, use cases and data for decisions. They work with NASA and the FAA to test technologies, and they are continually building and developing their own solutions. Their iconic fixed wing has gone through many generations – but now they offer choices of multi-rotor drones also with their enterprise packages.
Those enterprise packages are expanding fast. As new verticals start to adopt drone technology, PrecisionHawk is moving to meet them with solutions that range from the familiar precision agriculture to energy, construction, insurance, and government. Mining, powerlines, subsea mapping, forestry, solar fault detection, volume measurement, security: the applications are numerous and growing.
Chasen says that the drone industry is enjoying its popularity right now, but that exposure has led to a lot of industry hype. “Everyone is talking about new verticals,” he says. “And there is a lot of hype about future technology, new tools.” But Chasen says that PrecisionHawk’s platform is already providing value for customers. “We don’t want to perpetuate the hype,” says Chasen. “We’re doing real work in these industries.”
Google-owned robotics firm unveils 'nightmare-inducing' hybrid robot
Google-owned robotics firm and “nightmare” factory Boston Dynamics has released video of its latest creation: a two-wheeled, four-legged hybrid robot named Handle.
The robot can stand on four legs, like Boston Dynamics’ previous creations such as BigDog and Spot. But at the ends of its two back legs are stabilized wheels, which let it stand up vertically and roll around at speeds of up to nine miles per hour. Think “Terminator riding on a hoverboard” and you’ll have a pretty good idea of the impression Handle gives off.
Boston Dynamics says the reason for the hybrid design is the simplicity it affords: rather than needing the complex joints of fully quadrupedal bots, Handle’s wheels can speed it around with little difficulty, while its front legs can be used for balance and for carrying loads of up to 50kg.
“Handle uses many of the same dynamics, balance and mobile manipulation principles found in the quadruped and biped robots we build,” Boston Dynamics said, “but with only about 10 actuated joints, it is significantly less complex. Wheels are efficient on flat surfaces while legs can go almost anywhere: by combining wheels and legs, Handle can have the best of both worlds.”
The video does not, however, show Handle walking; it only scoots around on its wheels. The footage had previously leaked from a presentation by Boston Dynamics founder Marc Raibert to investors, in which even Raibert described it as a “nightmare-inducing robot”.
Google’s parent company Alphabet is reportedly looking to offload Boston Dynamics, following tensions within the company about its subsidiary’s fit within the wider corporate culture. After a previous robotics video was posted to YouTube, Google communications staff sought to distance the company from the hardware, citing the feeling that such technology could be “terrifying”, according to emails leaked to Bloomberg News.
HOW LASERS AND A GOGGLE-WEARING PARROT COULD AID FLYING ROBOT DESIGNS PART 1
A barely visible fog hangs in the air in a California laboratory, illuminated by a laser. And through it flies a parrot, outfitted with a pair of tiny, red-tinted goggles to protect its eyes.
As the bird flaps its way through the water particles, its wings generate disruptive waves, tracing patterns that help scientists understand how animals fly.
In a new study, a team of scientists measured and analyzed the particle trails that were produced by the goggle-wearing parrot’s test flights, and showed that previous computer models of wing movement aren’t as accurate as they once thought. This new perspective on flight dynamics could inform future wing designs in autonomous flying robots, according to the study authors.
When animals fly, they create an invisible “footprint” in the air, similar to the wake that a swimmer leaves behind in water. Computer models can interpret these air disturbances to calculate the forces that are required to keep a flyer aloft and propel it forward.
A team of scientists had recently developed a new system that tracked the airflow generated by flight at an unprecedented level of detail. They wanted to compare their improved observations to several commonly used computer models that use wake measurements to estimate flying animals’ lift, to see if their predictions would be on track.
Flight of the parrotlet
For the study, the researchers enlisted the help of a Pacific parrotlet — a type of small parrot — named Obi. Obi was trained to fly between two perches positioned about 3 feet (1 meter) apart, through a very fine mist of water droplets illuminated by a laser sheet. The water particles that seeded the air were exceptionally small, “only 1 micron in diameter,” said study author David Lentink, an assistant professor of mechanical engineering at Stanford University in California. (In comparison, the average strand of human hair is about 100 microns thick.)
Obi’s eyes were protected from the laser’s light with custom goggles: a 3D-printed frame that is fitted with lenses cut from human safety glasses — the same type of glasses worn by Lentink and his team.
When the laser flashed on and off — at a rate of 1,000 times per second — the water droplets scattered the laser’s light, and high-speed cameras shooting 1,000 frames per second captured the trails of disturbed particles as Obi fluttered from perch to perch.

Robot learns to follow orders like Alexa
Despite what you might see in movies, today’s robots are still very limited in what they can do. They can be great for many repetitive tasks, but their inability to understand the nuances of human language makes them mostly useless for more complicated requests.

For example, if you put a specific tool in a toolbox and ask a robot to “pick it up,” it would be completely lost. Picking it up means being able to see and identify objects, understand commands, recognize that the “it” in question is the tool you put down, go back in time to remember the moment when you put down the tool, and distinguish the tool you put down from other ones of similar shapes and sizes.
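As a toy sketch of just one piece of that puzzle, the code below resolves the pronoun “it” against a short memory of recently put-down objects. This is an invented illustration, not any specific research system.

```python
# Toy illustration of resolving "pick it up": the robot keeps a short memory
# of objects it has recently seen being put down, and binds "it" to the most
# recent one. Invented for illustration; not any specific research system.
from collections import deque

class ObjectMemory:
    def __init__(self, max_events: int = 10):
        self.events = deque(maxlen=max_events)   # (object_name, location), newest last

    def observe_put_down(self, obj: str, location: str) -> None:
        self.events.append((obj, location))

    def resolve_it(self) -> tuple[str, str]:
        """Bind the pronoun 'it' to the most recently put-down object."""
        if not self.events:
            raise ValueError("no recent object to refer to")
        return self.events[-1]

memory = ObjectMemory()
memory.observe_put_down("hammer", "toolbox")
memory.observe_put_down("torque wrench", "toolbox")

obj, loc = memory.resolve_it()                    # command: "pick it up"
print(f"Plan: go to the {loc} and pick up the {obj}")
```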
New robotic drill performs skull surgery 50 times faster
A robotic drill could perform your future surgery – way, way faster than usual.
Researchers from the University of Utah have created an automated machine that can do a complicated cranial surgery 50 times faster than standard procedures. The team’s approach reduces the surgery time from two hours with a hand drill to two-and-a-half minutes.
This specific surgery detailed in the paper – which was published Monday in the journal “Neurosurgical Focus” – is typically used to remove noncancerous tumors in patients with significant hearing loss. But the researchers say it’s a “proof of principle” to show the robot could perform complex procedures that require experience and skill.
The drill produces fast, clean and safe cuts, reducing the time the wound is open and the patient is under anesthesia. This decreases infection, surgical costs and human error, according to the researchers, led by neurosurgeon William Couldwell.
“It’s a time-saving device, more than anything,” Couldwell told CNNTech.
While the use of automation and robotics in surgery has been growing for the past decade – for example, medical robots already can help put screws in the spine or assist in hip replacement surgeries – Couldwell said this type of technology hasn’t been applied in skull-based surgery.
Here’s how it works: First, a CT scan collects a patient’s bone data and identifies the precise location of sensitive structures like nerves and major veins. Surgeons use the information from the CT scan to program the cutting path of the drill using special software developed by engineers on Couldwell’s team.
“We can program [it] to drill the bone out safely just by using the patient’s CT criteria,” Couldwell said. “It basically machines out the bone.”
The cutting path must avoid a number of sensitive features, such as the venous sinus, which drains blood from the brain. With the team’s approach, the surgeon can program safety barriers along the cutting path within one to two millimeters of these sensitive areas.
A surgeon would stand by during the procedure and can turn off the machine at any time. The drill also has built-in safeguards: for example, it can detect if it’s too close to a facial nerve and will automatically shut off.
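The Utah team’s planning software isn’t public, but the safety-barrier idea can be sketched as a simple geometric check: reject any drill waypoint that comes within a chosen margin of a sensitive structure located on the CT scan. All coordinates and names below are illustrative.

```python
# Illustrative geometry check only: verify that every waypoint on a planned
# cutting path stays at least `margin_mm` away from sensitive structures
# (e.g. nerves, venous sinus) located from a CT scan. Not the Utah team's code.
import math

def distance_mm(a, b):
    return math.dist(a, b)   # Euclidean distance between 3D points (mm)

def path_is_safe(path, sensitive_points, margin_mm=1.5):
    """Return (ok, offending_waypoint) for a planned cutting path."""
    for waypoint in path:
        for structure in sensitive_points:
            if distance_mm(waypoint, structure) < margin_mm:
                return False, waypoint
    return True, None

# Hypothetical coordinates in millimetres; in practice these come from the CT scan.
cutting_path = [(0, 0, 0), (5, 1, 0), (10, 2, 1), (15, 2, 3)]
sensitive = [(10.5, 2.2, 1.4)]            # e.g. a facial nerve location

ok, bad = path_is_safe(cutting_path, sensitive, margin_mm=1.5)
print("path safe" if ok else f"unsafe near waypoint {bad}")
```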
The drill was first tested on plastic blocks, and then on cadaver skulls. Throughout the process, the team worked on several prototypes to ensure accuracy and to make the final version portable and light enough to move between operating rooms.
The team is now working to commercialize the drill, either alone or by partnering with a medical device manufacturer. Couldwell estimates it will go to market in one to two years.
The device is expected to cost $100,000 or less. That would be quite cost effective over time, Couldwell said, because it’s extremely expensive to run an operating room. “If it saves two and a half hours per case, that’s a significant amount of savings over time,” he said.
The drill can also be applied to other surgical procedures, such as other complex openings of the skull or spine, and as an educational tool, according to the researchers.
“I would like to see it being used in major teaching hospitals because I think it would be a great teaching aid,” Couldwell said.
LIVE INTERACTIONS WITH ROBOTS INCREASE THEIR PERCEIVED HUMAN LIKENESS
Constanze Schreiner (University of Koblenz-Landau), Martina Mara (Ars Electronica Futurelab), and Markus Appel (University of Wurzburg) will present their findings at the 67th Annual Conference of the International Communication Association in San Diego, CA. Using a Roboy robot, participants observed one of three experimental human–robot interactions (HRI): either in real life, in virtual reality (VR) on a 3D screen, or on a 2D screen. The scripted HRI between Roboy and the human technician was 4:25 minutes long. During that time, participants saw Roboy assisting the human in organizing appointments, conducting web searches and finding a birthday present for his mom.
The data revealed that observing a live interaction, or alternatively encountering the robot in VR, led to greater perceived realness. Furthermore, the kind of presentation influenced perceived human-likeness: participants who observed a real HRI reported the highest perceived human-likeness. Particularly interesting is that participants who were introduced to Roboy in VR perceived the robot as less human-like than participants who watched a live HRI, whereas these two groups did not differ in perceived realness.
Usually, experimental studies of HRI and participants’ evaluations of humanoid service robots need to fall back on video stimuli because of limited resources. This is the first study to compare participants’ evaluations of a humanoid service robot observed either in a 2D video, in 3D virtual reality, or in real life.
“Many people will have their first encounter with a service robot over the next decade. Service robots are designed to communicate with humans in humanlike ways and assist them in various aspects of their daily routine. Potential areas of application range from hospitals and nursing homes to hotels and the users’ households,” said Schreiner. “To date, however, most people still only know such robots from the Internet or TV and are still skeptical about the idea of sharing their personal lives with robots, especially when it comes to machines of highly human-like appearance.”
“When R2-D2 Hops off the Screen: A Service Robot Encountered in Real Life Appears More Real and Humanlike Than on Video or in VR,” by Constanze Schreiner, Martina Mara, and Markus Appel; to be presented at the 67th Annual International Communication Association Conference, San Diego, CA, 25-29 May 2017.
With this 3-D printed "bionic skin" that can sense touch we're one step closer to true humanoids
In recent decades, the world has witnessed humanoid robots rapidly cross over from fantasy into real life. Now, engineers and scientists are one step closer to true humanoids with a 3-D-printed, skin-like fabric that can sense touch.
Scientists at the University of Minnesota essentially created sheets of bionic skin that may end up on robots or humans in the future. The flexible fabric, which can stretch as far as three times its normal size, is constructed in four layers laid down as “ink” from 3-D printers. First, scientists created a silicone base, then added two layers of electrodes that operate as pressure sensors. The last layer, which binds everything together, eventually was stripped away to keep the sensors exposed and sensitive to touch.
Artificial skin has captured the interest of scientists and engineers since the 1970s; they have long hoped to use it on burn victims or diabetes patients who suffer from severe skin ulceration. Interest in 3-D printing has been around for almost as long — 3-D printing was invented by Charles W. Hull in the mid-1980s — but this is the first time the two fields have come together with touch sensors.
“I haven’t heard of anything like [this],” Robert Langer, a professor at the Massachusetts Institute of Technology who led research that developed a polymer-based layer of artificial skin in 2016, said in a phone interview. “The interesting thing is now you can have a smart skin, so to speak, right? This is all speculation, but maybe there could be a skin that can detect things in the environment.”
Most applications are left to the imagination for now, but the fabric’s sensors are sensitive enough to pick up on something as light as a human pulse. Scientists can’t yet print the fabric directly onto a human body, but the good news is that the inks from the 3-D printer already manage to set in a room-temperature environment. Normally, 3-D printing uses hot liquid plastic, which would make it impossible (and downright painful) to print onto human skin.
In the meantime, researchers have at least one other idea on how it can be used.
“Putting this type of ‘bionic skin’ on surgical robots would give surgeons the ability to actually feel during minimally-invasive surgeries, which would make surgery easier, instead of just using cameras like they do now,” Michael McAlpine, the study’s lead researcher, said in a release. “The possibilities for the future are endless.”
DOMO + SALESFORCE: How Domo works with Salesforce
Whether you live and breathe Salesforce every day, or you’re a leader who wants insights without the nitty-gritty details, Domo makes your Salesforce data easier to consume and then put to use. Domo also helps you combine Salesforce insights with data from any other source, so you can make faster, better-informed decisions.
• Preview real data
• Simplify cross-object reporting
• Powerful Salesforce reporting options
• Manage by exception
• Combine Salesforce data with any other system
Domo Advanced Builder Enhances and Extends Salesforce.com Dashboards and Reporting
Because of its advanced feature set and innovative delivery model, Salesforce.com has become the leader in software-as-a-service (SaaS) customer relationship management (CRM) software and has been adopted by many organizations as their de facto solution for storing CRM data, tracking sales activities and creating forecasts.
Domo Advanced Builder simplifies and enhances these capabilities by bringing together Salesforce.com reports and data from other enterprise systems into one intuitive, comprehensive view, and by creating dashboards that track multiple variables without requiring users to rerun multiple reports.
Domo Advanced Builder gives organizations that use Salesforce.com additional value through these benefits:
• Viewing data from multiple sources on one dashboard
• Accessing dashboards through Domo Advanced Builder or Salesforce.com
• Querying Salesforce.com data with the query wizard (see the sketch after this list)
• Simplifying Salesforce.com reporting
• Improving time to access Salesforce.com data
• Customizing the appearance of information
• Communicating securely between Domo Advanced Builder and Salesforce.com
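Domo’s query wizard itself is proprietary, but underneath, pulling Salesforce data programmatically typically means issuing a SOQL query against Salesforce’s REST API. Here is a minimal sketch assuming you already have an instance URL and OAuth access token (both placeholders below):

```python
# Minimal sketch of querying Salesforce data with SOQL over the REST API.
# The instance URL, API version, and token below are placeholders; in a real
# integration, a tool like Domo handles authentication and scheduling for you.
import json
import urllib.parse
import urllib.request

INSTANCE_URL = "https://yourInstance.my.salesforce.com"   # placeholder
ACCESS_TOKEN = "REPLACE_WITH_OAUTH_TOKEN"                 # placeholder
API_VERSION = "v57.0"

def run_soql(soql: str) -> list[dict]:
    """Run a SOQL query and return the result records."""
    url = (f"{INSTANCE_URL}/services/data/{API_VERSION}/query"
           f"?q={urllib.parse.quote(soql)}")
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))["records"]

for opp in run_soql("SELECT Name, Amount, StageName FROM Opportunity LIMIT 5"):
    print(opp["Name"], opp["StageName"], opp.get("Amount"))
```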
SPOILER ALERT: ARTIFICIAL INTELLIGENCE CAN PREDICT HOW SCENES WILL PLAY OUT
A new artificial intelligence system can take still images and generate short videos that simulate what happens next, similar to how humans can visually imagine how a scene will evolve, according to a new study.
Humans intuitively understand how the world works, which makes it easier for people, as opposed to machines, to envision how a scene will play out. But objects in a still image could move and interact in a multitude of different ways, making it very hard for machines to accomplish this feat, the researchers said. Even so, a new deep-learning system was able to fool human viewers 20 percent of the time when its clips were compared with real footage.
Researchers at the Massachusetts Institute of Technology (MIT) pitted two neural networks against each other, with one trying to distinguish real videos from machine-generated ones, and the other trying to create videos that were realistic enough to trick the first system.
This kind of setup is known as a “generative adversarial network” (GAN), and competition between the systems results in increasingly realistic videos. When the researchers asked workers on Amazon’s Mechanical Turk crowdsourcing platform to pick which videos were real, the users picked the machine-generated videos over genuine ones 20 percent of the time, the researchers said.
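To make the adversarial setup concrete, here is a heavily simplified GAN training loop in PyTorch: a generator learns to produce samples that a discriminator cannot tell apart from real data. This is a generic sketch on toy data, not the MIT video model.

```python
# Generic GAN sketch (not the MIT video GAN): a generator G maps noise to
# fake samples, a discriminator D scores samples as real or fake, and the two
# are trained against each other. Toy 2-D "data" keeps the example short.
import torch
import torch.nn as nn

torch.manual_seed(0)
NOISE_DIM, DATA_DIM = 8, 2

G = nn.Sequential(nn.Linear(NOISE_DIM, 32), nn.ReLU(), nn.Linear(32, DATA_DIM))
D = nn.Sequential(nn.Linear(DATA_DIM, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def real_batch(n=64):
    # Stand-in for "real video clips": points from a shifted Gaussian.
    return torch.randn(n, DATA_DIM) + torch.tensor([2.0, -1.0])

for step in range(2000):
    # --- train the discriminator: real -> 1, fake -> 0 ---
    real = real_batch()
    fake = G(torch.randn(real.size(0), NOISE_DIM)).detach()
    d_loss = bce(D(real), torch.ones(real.size(0), 1)) + \
             bce(D(fake), torch.zeros(fake.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # --- train the generator: make D label fakes as real ---
    fake = G(torch.randn(64, NOISE_DIM))
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print("final generator loss:", g_loss.item())
```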
Early stages
Still, budding film directors probably don’t need to be too concerned about machines taking over their jobs yet — the videos were only 1 to 1.5 seconds long and were made at a resolution of 64 x 64 pixels. But the researchers said that the approach could eventually help robots and self-driving cars navigate dynamic environments and interact with humans, or let Facebook automatically tag videos with labels describing what is happening.
“Our algorithm can generate a reasonably realistic video of what it thinks the future will look like, which shows that it understands at some level what is happening in the present,” said Carl Vondrick, a Ph.D. student in MIT’s Computer Science and Artificial Intelligence Laboratory, who led the research. “Our work is an encouraging development in suggesting that computer scientists can imbue machines with much more advanced situational understanding.”
The system is also able to learn unsupervised, the researchers said. This means that the two million videos — equivalent to about a year’s worth of footage — that the system was trained on did not have to be labeled by a human, which dramatically reduces development time and makes it adaptable to new data.
In a study that is due to be presented at the Neural Information Processing Systems (NIPS) conference, which is being held from Dec. 5 to 10 in Barcelona, Spain, the researchers explain how they trained the system using videos of beaches, train stations, hospitals and golf courses.
“In early prototypes, one challenge we discovered was that the model would predict that the background would warp and deform,” Vondrick told Live Science. To overcome this, they tweaked the design so that the system learned separate models for a static background and moving foreground before combining them to produce the video.
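That fix can be sketched as a simple compositing step: the generator produces a static background, a moving foreground and a soft mask, and combines them per pixel. The tensor shapes and names below are assumptions for illustration, not the authors’ code.

```python
# Sketch of the two-stream idea described in the article: a static background b,
# a moving foreground f, and a soft mask m are combined per pixel as
#   video = m * f + (1 - m) * b
import torch

T, C, H, W = 32, 3, 64, 64             # frames, channels, height, width

foreground = torch.rand(T, C, H, W)    # would come from the generator's moving stream
mask = torch.rand(T, 1, H, W)          # soft mask in [0, 1], broadcast over channels
background = torch.rand(1, C, H, W)    # single static frame, shared by all T frames

video = mask * foreground + (1 - mask) * background.expand(T, C, H, W)
print(video.shape)                     # torch.Size([32, 3, 64, 64])
```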
AI filmmakers
The MIT team is not the first to attempt to use artificial intelligence to generate video from scratch. But, previous approaches have tended to build video up frame by frame, the researchers said, which allows errors to accumulate at each stage. Instead, the new method processes the entire scene at once — normally 32 frames in one go.
Ian Goodfellow, a research scientist at the nonprofit organization OpenAI, who invented GAN, said that systems doing earlier work in this field were not able to generate both sharp images and motion the way this approach does. However, he added that a new approach that was unveiled by Google’s DeepMind AI research unit last month, called Video Pixel Networks (VPN), is able to produce both sharp images and motion.
“Compared to GANs, VPN are easier to train, but take much longer to generate a video,” he told Live Science. “VPN must generate the video one pixel at a time, while GANs can generate many pixels simultaneously.”
Vondrick also points out that their approach works on more challenging data like videos scraped from the web, whereas VPN was demonstrated on specially designed benchmark training sets of videos depicting bouncing digits or robot arms.
The results are far from perfect, though. Often, objects in the foreground appear larger than they should, and humans can appear in the footage as blurry blobs, the researchers said. Objects can also disappear from a scene and others can appear out of nowhere, they added.
“The computer model starts off knowing nothing about the world. It has to learn what people look like, how objects move and what might happen,” Vondrick said. “The model hasn’t completely learned these things yet. Expanding its ability to understand high-level concepts like objects will dramatically improve the generations.”
Another big challenge moving forward will be to create longer videos, because that will require the system to track more relationships between objects in the scene and for a longer time, according to Vondrick.
“To overcome this, it might be good to add human input to help the system understand elements of the scene that would be difficult for it to learn on its own,” he said.