cageymaru
Fully [H]
A team of researchers at NVIDIA has developed a GPU-accelerated reinforcement learning simulator that runs on a single NVIDIA Tesla V100 GPU and one CPU core. Using it, the team trained AI agents to perform human-like tasks such as running in under 20 minutes. The simulator supports hundreds of thousands of virtual robots simultaneously on a single GPU, and the robots learned to traverse complex, uneven terrain.
"Unlike simulating individual robots on each CPU cores, we load all simulated agents onto the same scene on one GPU, so they can interact and collide with each other," the researchers stated. "The peak GPU simulation frame time per agent for the humanoid environment is less than 0.02ms." "Using FleX, we implement an OpenAI Gymlike interface to perform RL experiments for continuous control locomotion tasks," the team stated. Using the OpenAI Roboschool and the Deepmind Parkour environments the team trained the virtual agents to run toward changing targets, recover from falls, and run on complex and uneven terrains.
"Unlike simulating individual robots on each CPU cores, we load all simulated agents onto the same scene on one GPU, so they can interact and collide with each other," the researchers stated. "The peak GPU simulation frame time per agent for the humanoid environment is less than 0.02ms." "Using FleX, we implement an OpenAI Gymlike interface to perform RL experiments for continuous control locomotion tasks," the team stated. Using the OpenAI Roboschool and the Deepmind Parkour environments the team trained the virtual agents to run toward changing targets, recover from falls, and run on complex and uneven terrains.