Researchers at the University of California, Berkeley and Google have released a report on lessons learned from years of experiments in training robots with deep reinforcement learning (RL).
The researchers found that real-world robotics does not conform to even the most basic assumptions of the RL paradigm.
They said latency in robotics "violates the most fundamental assumption of [the Markov Decision Process], and thus can cause failure to some RL algorithms."
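To make the latency point concrete, here is a minimal sketch, not taken from the report: a toy 1-D point-mass task in which a simple proportional controller acts on observations that arrive several steps late. The task, gains, and latency values are illustrative assumptions, not the Berkeley/Google setup; the sketch only shows that once actions are computed from stale observations, the next state depends on in-flight history that the current observation does not capture, which is the Markov violation the researchers describe.

```python
"""Illustrative sketch (assumed setup, not the report's experiments):
how observation/action latency degrades a controller and makes the
problem non-Markovian from the policy's point of view."""
import random


def simulate(latency_steps: int, episode_len: int = 30) -> float:
    """Drive x toward 0 with a proportional policy.

    With latency_steps > 0, each action is computed from an observation
    that is latency_steps old, so the transition the learner sees depends
    on unobserved in-flight actions, not just the current observation.
    """
    x = 1.0                                   # true state
    obs_queue = [x] * (latency_steps + 1)     # delayed observation pipeline
    total_cost = 0.0
    for _ in range(episode_len):
        stale_obs = obs_queue.pop(0)          # what the policy actually sees
        action = -0.5 * stale_obs             # simple proportional policy
        x = x + action + random.gauss(0.0, 0.01)   # true dynamics + noise
        obs_queue.append(x)
        total_cost += x * x
    return total_cost


if __name__ == "__main__":
    random.seed(0)
    print("cost, no latency:    ", round(simulate(latency_steps=0), 3))
    print("cost, 3-step latency:", round(simulate(latency_steps=3), 3))
    # One common remedy (an assumption here, not a prescription from the
    # article): augment the observation with the queue of in-flight
    # actions so the learner's state is Markovian again.
```

Running the script shows the same policy that converges without latency oscillates and accumulates far higher cost with a three-step delay, which is the kind of failure the quoted passage attributes to violating the MDP assumption.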
The researchers also detail the real-world challenges associated with goals and rewards.
Said researcher Sergey Levine, "In a game like chess or Go, the RL policy will only be as good as the 'simulator' that it inhabits," but in the real world, "a robot can experience many of the same things that we experience ... and maybe even learn things that might surprise us."
From ZDNet