#Sim2Real
Text
youtube
China’s humanoid robot turns into Kung-fu master after dazzling dance debut
The robot trains in a virtual environment, learns from human motions, and then transfers those skills to the physical hardware using Sim2Real technology.
Unitree’s latest video shows its compact humanoid robot mastering kung-fu moves, including punches and roundhouse kicks, while maintaining balance. The Chinese company is continually upgrading the G1’s control algorithm, with the goal of enabling it to learn and perform virtually any movement. The most recent update appears to have expanded the humanoid’s balance capabilities and repertoire of movements.
In a video released last week, G1 showcased agility and balance, performing smooth dance routines and precise footwork even while being interrupted.
In January, Unitree showcased its G1 robot’s smooth walking and running system, demonstrating agility, stability, and precise control on inclines and uneven terrain.
Humanoid robot’s agile moves
In its most recent video, Unitree’s G1 humanoid performs impressive kung-fu moves with remarkable balance, although it nearly trips once. The algorithm upgrade that powers these nimble movements improves the robot’s capacity to learn and carry out intricate tasks, and with 23 degrees of freedom (DoF) and enhanced stability, the G1 exhibits notable coordination and flexibility.

Despite this display of martial-arts skill, the video concludes with a warning that users should not modify the robot in any way that could endanger others or teach it actual combat techniques. Unitree says the G1 is designed to handle difficult, dirty, and repetitive jobs in a variety of settings, including homes, factories, and hospitals, embodying the company’s vision of humanoid robots as useful companions for work and daily life.
youtube
Training begins in a virtual environment built with Nvidia’s Isaac simulator, allowing the robot to learn complex behaviors before they are ever run on hardware. The method entails building a digital twin of the humanoid and feeding it motion-capture and video data of human actions, which are then practiced in the virtual environment through reinforcement learning. The acquired skills are subsequently transferred to the physical robot through a procedure known as Sim2Real, which carries policies trained in simulation over to the real world.
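To make that pipeline concrete, here is a minimal sketch of the train-in-simulation-then-deploy loop. It is not Unitree’s or Nvidia’s actual code: the environment, the crude finite-difference policy update standing in for reinforcement learning, and all names and dimensions are illustrative placeholders.

```python
# Minimal sketch: train a policy in a simulated environment, then run the same
# frozen policy on the real robot's observations. SimHumanoidEnv is a toy
# stand-in for a physics simulator (e.g. an Isaac-style digital twin).
import numpy as np

class SimHumanoidEnv:
    def __init__(self, obs_dim=48, act_dim=23, seed=0):
        self.rng = np.random.default_rng(seed)
        self.obs_dim, self.act_dim = obs_dim, act_dim

    def reset(self):
        # Domain randomization: perturb the dynamics each episode so the policy
        # does not overfit to one exact simulation.
        self.dynamics_scale = self.rng.uniform(0.8, 1.2)
        self.state = self.rng.normal(size=self.obs_dim)
        return self.state

    def step(self, action):
        self.state = self.state + 0.01 * self.dynamics_scale * np.tanh(action).mean()
        reward = -np.linalg.norm(self.state) - 0.001 * np.square(action).sum()
        return self.state, reward, False

def train_linear_policy(env, episodes=50, horizon=100, lr=1e-3):
    """Crude random-search / finite-difference update, standing in for RL."""
    W = np.zeros((env.act_dim, env.obs_dim))
    for _ in range(episodes):
        noise = np.random.normal(scale=0.05, size=W.shape)
        returns = []
        for sign in (+1, -1):
            obs, total = env.reset(), 0.0
            for _ in range(horizon):
                obs, r, _ = env.step((W + sign * noise) @ obs)
                total += r
            returns.append(total)
        W += lr * (returns[0] - returns[1]) * noise  # move toward the better perturbation
    return W

if __name__ == "__main__":
    policy = train_linear_policy(SimHumanoidEnv())
    # Sim2Real step: the trained weights are deployed unchanged; only the
    # observation source switches from the simulator to the robot's sensors.
    np.save("policy_weights.npy", policy)
```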
Open-source motion
To improve the natural movement of its humanoid robots, such as the H1, H1-2, and G1 models, Unitree Robotics has released an open-source full-body motion dataset. The dataset enhances the robots’ flexibility and coordination, allowing them to mimic human-like movements – like dancing. Built from LAFAN1 motion-capture data, it is fully compatible with Unitree’s main robot models and is showcased performing lifelike movements in a newly released demonstration video. The dataset ships with a retargeting algorithm that combines an interaction-mesh representation with inverse kinematics; by accounting for end-pose constraints, joint positions, and velocity limits, the method optimizes how the captured motions map onto the robots.
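For flavor, here is a hedged sketch of the retargeting idea – per-frame inverse kinematics toward a captured end-effector path, with joint-position limits and a velocity clamp between frames – on a toy two-joint planar arm. It is not Unitree’s released algorithm; the link lengths, limits, and mocap path are invented for illustration.

```python
import numpy as np

L1, L2 = 0.4, 0.4                       # link lengths of a toy 2-joint planar arm
JOINT_MIN, JOINT_MAX = -2.5, 2.5        # joint position limits (rad)
MAX_JOINT_VEL = 0.2                     # max joint change per frame (rad/frame)

def fk(q):
    """Forward kinematics: joint angles -> end-effector (x, y)."""
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def jacobian(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def retarget(targets, q0=np.zeros(2), iters=20, damping=1e-2):
    """Track a captured end-effector trajectory while respecting limits."""
    q, out = q0.copy(), []
    for target in targets:
        q_prev = q.copy()
        for _ in range(iters):                                 # damped least-squares IK
            err = target - fk(q)
            J = jacobian(q)
            dq = np.linalg.solve(J.T @ J + damping * np.eye(2), J.T @ err)
            q = np.clip(q + dq, JOINT_MIN, JOINT_MAX)          # position limits
        q = q_prev + np.clip(q - q_prev, -MAX_JOINT_VEL, MAX_JOINT_VEL)  # velocity limit
        out.append(q.copy())
    return np.array(out)

if __name__ == "__main__":
    # Hypothetical "mocap" end-effector path: a small arc in front of the arm.
    t = np.linspace(0, np.pi / 2, 60)
    mocap_path = np.stack([0.6 * np.cos(t), 0.6 * np.sin(t)], axis=1)
    joint_trajectory = retarget(mocap_path)
    print(joint_trajectory.shape)   # (60, 2): a joint-space motion for the robot
```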
By making this dataset publicly accessible, Unitree hopes to enable researchers and enthusiasts to explore novel uses for humanoid robots in diverse real-world contexts.
youtube
Unitree’s high-performance H1 and H1-2 models are built for advanced capabilities; as of March 2024, the H1 held the Guinness World Record as the fastest full-size humanoid robot, sprinting at 7.38 mph (3.3 m/s). The G1 model, on the other hand, offers a more compact and reasonably priced option for research and development.
According to the firm, it aims to promote exploration and creativity through this open-source project, expanding the possibilities for more dynamic, human-like interactions and pushing the limits of humanoid robotics.
Source: https://interestingengineering.com
#robots#robotics#Sim2Real technology#Sim2Real#china#chinese technology#Chinese robots#Kung Fu Bot#Unitree G1#Unitree#new technology#tech news#Youtube
0 notes
Link
Developing effective locomotion policies for quadrupeds poses significant challenges in robotics ...
0 notes
Text

Horseshit take
There's literally 7 big name tech companies in the US trying to make humanoid robots for these kinds of tasks
Physical Intelligence
Agility Robotics
Tesla
Mattic Robotics
Apptronik
Boston Dynamics
Neura
Guess what the issues are
1. Task generalization
2. Sim2real
We're trying 👹👹👹👹👹👹👹👹👹
There's a new technique for robot learning with diffusion so ig it adds a little bit of adaptability but still
These robots
Cannot generalize. They can only do the tasks they're trained for. This is horrible. You give them a meat-cutting knife instead of a butter knife but a slab of meat looks too different from a slab of butter and the cutting technique is different (example. It's not quite this simple)
That poor bot is cooked
And we're trying WE'RE TRYING
You have no idea how much easier LLMs are to make
#dont talk about ai im mad#think of the poor idiot researchers (me) trying to make this happen#ohhhh capitalism evil propaganda propaganda#cmon. just talk to one of us#im more than happy to#no just kidding im under NDA#im more than happy to refer you to my brother who knows who isnt under NDA#sorry op that i screenshotted youll be my excuse to rant about how nobody gets ai#just for reference tho. cornell hosts something called arxiv where you can find m#you can find papers and paper manuscripts. and its free. completely free to look at#if anyone has ever wanted to make their life hell and read some papers for fun#its tumblrs fault for putting this on my for u#for the record in terms of personal politics. i dont think ai is trying to replace creatives. i think its trying to replace everyone#you havent seen how much of a rush there is to get warehouse automation ai infrastructure in#i think we're seeing people use it to try to replace creatives first because most people are inherently not creative#ive been debating with myself. would people get bored eventually and return to base (real creatives) or will it kinda be like mcdonalds.#i think both? i think both. i have no one to discuss this with#yknow colleges always push that engineers are some sort of moral-less ethics-less demons so they make us take required humanities classes#but in none of those classes do they ever answer the question why humanity tends to gravitate towards slavery#avo if they answered that people will think critically about communism so they cant happen#weve got like 2 different kinds of pro slavery advocates rn and lemme tell you none of those people are engineers.#i dont think its our ethics and morals you should be concerned about
1 note
Text
2024-04-01 / A former colleague visits from afar
A colleague from my previous job got in touch because he was coming to the Bay Area on a business trip, so we went out for lunch near my office. I'm grateful he reaches out on occasions like this.
He has since moved to a startup, where he is now CTO, and as I listened to him talk about the partnership that was the purpose of this trip, among other things, it all sounded dynamic and fun. I should probably start keeping my own antennae up so that I, too, can take a new step soon.
Come to think of it, the other day as I drove past Nvidia's headquarters it suddenly hit me that GTC was on, so I watched the keynote.
I knew they did a lot beyond manufacturing GPUs and developing their software tool stack, but honestly I had never looked into it properly; watching the keynote, I thought it really is an interesting company doing many different things. They are pushing Omniverse pretty hard, but I wonder whether the so-called Sim2Real problem (machine learning models trained in a simulator failing to perform as expected when applied to the real world) isn't an issue for them. The pitch is that you first run simulations on a digital twin, and the physical world comes after. For domains that stay within a certain level of complexity I imagine it is plenty useful, but the latter half of the keynote also touched repeatedly on robotics applications, and that brought the Sim2Real question back to mind. Do they do a bit of fine-tuning at deployment time in the real world, or has the world already reached an accuracy far beyond what I imagine?
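If it is the former, I imagine it looks something like the sketch below: start from the sim-trained policy weights and nudge them with a handful of supervised steps on a small buffer of data collected on the real hardware. This is purely my own illustration – the function names, shapes, and numbers are hypothetical, not anything described in the keynote.

```python
import numpy as np

def finetune(W_sim, real_obs, real_actions, lr=1e-2, steps=200):
    """W_sim: (act_dim, obs_dim) policy trained in simulation.
    real_obs: (N, obs_dim), real_actions: (N, act_dim) collected on hardware."""
    W = W_sim.copy()
    for _ in range(steps):
        pred = real_obs @ W.T                      # policy's actions on real observations
        grad = (pred - real_actions).T @ real_obs / len(real_obs)
        W -= lr * grad                             # small supervised correction
    return W

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W_sim = rng.normal(size=(23, 48))              # pretend sim-trained weights
    obs = rng.normal(size=(64, 48))                # a small real-world dataset
    acts = obs @ W_sim.T + rng.normal(scale=0.1, size=(64, 23))  # slightly shifted targets
    W_real = finetune(W_sim, obs, acts)
    print(np.linalg.norm(W_real - W_sim))          # the weights move only a little
```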
0 notes
Text
Ambi deploying parcel sorting robots at OSM warehouses
youtube
Ambi Robotics is deploying its parcel-sorting robots at OSM Worldwide’s warehouses in the U.S. Under the minimum four-year robots-as-a-service (RaaS) deal, the flagship AmbiSort A-Series system will be installed at OSM warehouses in Atlanta, Chicago, and Las Vegas.
The AmbiSort A-Series is a configurable robotic sorting system that uses machine learning to sort mixed parcels, such as polybags, flats, and boxes, into last-mile mailsacks. The systems are modular and configurable, accepting parcels via rolling bin or the new conveyor-fed automated induction system.
AmbiSort is powered by AmbiOS, the company’s proprietary operating system that leverages simulation-to-reality (Sim2Real) artificial intelligence (AI). AmbiOS is based on The Dexterity Network (Dex-Net) project that was developed at UC Berkeley to automate the training of deep neural networks to improve a robot’s ability to grasp various items. Many of Dex-Net’s developers are now working at Ambi Robotics.
According to Ambi Robotics, AmbiSort systems are first designed and trained in simulation, which the company says makes training up to 10,000 times faster than teaching the algorithms in the physical world.
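A hedged sketch of the simulation-first training idea described here, in the spirit of Dex-Net but not Ambi’s actual AmbiOS code: generate a large number of synthetic grasp candidates with cheap analytic success labels, fit a scorer, and use it to rank new grasps. The features, labels, and logistic-regression model below are toy stand-ins for the depth images and deep networks a real system would use.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_grasps(n):
    """Toy 'simulator': features = (gripper-to-center offset, approach angle error,
    object width). Success is more likely when offset and angle error are small."""
    feats = rng.uniform([-0.05, -0.5, 0.02], [0.05, 0.5, 0.12], size=(n, 3))
    logits = 3.0 - 40.0 * np.abs(feats[:, 0]) - 4.0 * np.abs(feats[:, 1])
    labels = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logits))).astype(float)
    return feats, labels

def train_grasp_scorer(feats, labels, lr=0.1, epochs=300):
    X = np.hstack([feats, np.ones((len(feats), 1))])    # add bias column
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - labels) / len(X)            # logistic-regression gradient step
    return w

if __name__ == "__main__":
    X_sim, y_sim = simulate_grasps(20_000)               # cheap to generate at scale
    w = train_grasp_scorer(X_sim, y_sim)
    candidate = np.array([0.01, 0.05, 0.06, 1.0])        # a new grasp candidate (with bias)
    print("predicted success:", 1.0 / (1.0 + np.exp(-candidate @ w)))
```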
“At OSM Worldwide, we are always looking for ways to improve our sorting and delivery operations, and we’re excited to partner with Ambi Robotics to empower our workforce with cutting-edge technology across our warehouses,” said James Kelley, president at OSM Worldwide. “With the AmbiSort A-Series systems, we can improve order accuracy and speed to our ecommerce customers while improving efficiency and safety for our warehouse employees amid rising parcel demand.”
Ambi Robotics’ AmbiSort parcel sortation system. | Credit: Ambi Robotics
Ambi Robotics raised a $32 million funding round in October 2022. The company has now raised about $67 million since it was founded in 2018. It closed a $26 million Series A in September 2021.
Ambi said it deployed an additional 60 robots to its U.S. customer base in under 60 days ahead of the 2022 peak holiday season, and that its robotic sorting systems are being used in more than 13 cities across the U.S. Pitney Bowes, a global shipping and mailing company, is another high-profile customer for Ambi; it recently signed a $23 million expansion deal that will bring AmbiSort systems to additional warehouse locations.
Jeff Mahler, co-founder and CTO of Ambi Robotics, will be speaking at the Robotics Summit & Expo, which takes place May 10-11 in Boston. Mahler will be on the panel “Innovation in Robotic Grasping” to discuss emerging approaches to robotic manipulation, including the work being done at Ambi.
The post Ambi deploying parcel sorting robots at OSM warehouses appeared first on The Robot Report.
0 notes
Video
youtube
Narrowing the Sim2Real Gap with NVIDIA Isaac Sim
0 notes
Text
Watch Google’s ping pong robot pull off a 340-hit rally
As if it weren’t enough to have AI tanning humanity’s hide (figuratively for now) at every board game in existence, Google AI has got one working to destroy us all at ping pong as well. For now they emphasize it’s “cooperative” but at the rate these things improve, it will be taking on pros in no time. The project, called i-Sim2Real, isn’t just about ping pong but rather about building a robotic…
0 notes
Link
In June 2019, Facebook’s AI lab, FAIR, released AI Habitat, a new simulation platform for training AI agents. It allowed agents to explore various realistic virtual environments, like a furnished apartment or cubicle-filled office. The AI could then be ported into a robot, which would gain the smarts to navigate through the real world without crashing.
In the year since, FAIR has rapidly pushed the boundaries of its work on “embodied AI.” In a blog post today, the lab announced three additional milestones: two new algorithms that allow an agent to quickly create and remember a map of the spaces it navigates, and the addition of sound to the platform so agents can be trained to hear.
The algorithms build on FAIR’s work in January of this year, when an agent was trained in Habitat to navigate unfamiliar environments without a map. Using just a depth-sensing camera, GPS, and compass data, it learned to enter a space much as a human would, and find the shortest possible path to its destination without wrong turns, backtracking, or exploration.
The first of these new algorithms can now build a map of the space at the same time, allowing it to remember the environment and navigate through it faster if it returns. The second improves the agent’s ability to map the space without needing to visit every part of it. Having been trained on enough virtual environments, it is able to anticipate certain features in a new one; it can know, for example, that there is likely to be empty floor space behind a kitchen island without navigating to the other side to look. Once again, this ultimately allows the agent to move through an environment faster.
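A toy illustration of the map-while-you-navigate idea – not FAIR’s Habitat API or their actual algorithm: an agent that only knows its own pose senses nearby cells, records them in an occupancy map, and greedily steps toward the goal while avoiding cells it has marked as blocked. The grid, wall, and sensing model are invented for the example.

```python
import numpy as np

GRID = np.zeros((10, 10), dtype=int)
GRID[4, 2:8] = 1                       # a wall the agent does not know about in advance

def navigate(start, goal, max_steps=60):
    known = np.full(GRID.shape, -1)    # -1 unknown, 0 free, 1 blocked: the agent's "map"
    pos = np.array(start)
    path = [tuple(pos)]
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    for _ in range(max_steps):
        if tuple(pos) == goal:
            break
        # "Sense" the four neighbors (stand-in for a depth camera) and update the map.
        for d in moves:
            n = pos + d
            if 0 <= n[0] < 10 and 0 <= n[1] < 10:
                known[tuple(n)] = GRID[tuple(n)]
        # Greedy step: among known-free neighbors, pick the one closest to the goal.
        options = [pos + d for d in moves
                   if 0 <= (pos + d)[0] < 10 and 0 <= (pos + d)[1] < 10
                   and known[tuple(pos + d)] == 0]
        if not options:
            break
        pos = min(options, key=lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1]))
        path.append(tuple(pos))
    return path, known

if __name__ == "__main__":
    route, agent_map = navigate(start=(0, 0), goal=(9, 9))
    print(len(route), "steps; map cells observed:", int((agent_map >= 0).sum()))
```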
Finally, the lab also created SoundSpaces, a sound-rendering tool that allows researchers to add highly realistic acoustics to any given Habitat environment. It could render the sounds produced by hitting different pieces of furniture, or the sounds of heels versus sneakers on a floor. The addition gives Habitat the ability to train agents on tasks that require both visual and auditory sensing, like “Get my ringing phone” or “Open the door where the person is knocking.”
Of the three developments, the addition of sound training is most exciting, says Ani Kembhavi, a robotics researcher at the Allen Institute for Artificial Intelligence, who was not involved in the work. Similar research in the past has focused more on giving agents the ability to see or to respond to text commands. “Adding audio is an essential and exciting next step,” he says. “I see many different tasks where audio inputs would be very useful.” The combination of vision and sound in particular is “an underexplored research area,” says Pieter Abbeel, the director of the Robot Learning Lab at the University of California, Berkeley.
Each of these developments, FAIR’s researchers say, brings the lab incrementally closer to achieving intelligent robotic assistants. The goal is for such companions to be able to move about nimbly and perform sophisticated tasks like cooking.
But it will be a long time before we can let robot assistants loose in the kitchen. One of the many hurdles FAIR will need to overcome: bringing all the virtual training to bear in the physical world, a process known as “sim2real” transfer. When the researchers initially tested their virtually trained algorithms in physical robots, the process didn’t go so well.
Moving forward, the FAIR researchers hope to start adding interaction capabilities into Habitat as well. “Let’s say I’m an agent,” says Kristen Grauman, a research scientist at FAIR and a computer science professor at the University of Texas, Austin, who led some of the work. “I walk in and I see these objects. What can I do with them? Where would I go if I’m supposed to make a soufflé? What tools would I pick up? These kinds of interactions and even manipulation-based changes to the environment would bring this kind of work to another level. That’s something we’re actively pursuing.”
0 notes
Photo
"[R] Sim2Real – Using Simulation to Train Real-Life Grasping Robots"- Detail: Hey, I published a summary of RCAN, a new robotics paper from X/Google/Deepmind which achieves state-of-the-art results in robotic grasping with very little training data. The main idea is that instead of training the grasping robot on full-resolution images of object grasping you train it on a simplified version (canonical style) of the grasps. Personally, I think it's one of the most interesting papers of 2018 and it's also cool that it combines GANs, Reinforcement Learning, and Computer Vision. Full summary here: http://bit.ly/2Th9qXS. Caption by tldrtldreverything. Posted By: www.eurekaking.com
0 notes