Carnegie Mellon University (CMU) scientists have developed a computer model that can translate text describing physical movements directly into computer-generated animation, an initial step toward creating movies from scripts.
The Joint Language-to-Pose (JL2P) neural architecture jointly embeds sentences and physical motions, allowing it to learn how language relates to action, gestures, and movement.
CMU's Louis-Philippe Morency said the current focus for JL2P is virtual character animation, but he suggested, "this link between language and gestures could be applied to robots; we might be able to simply tell a personal assistant robot what we want it to do."
JL2P treats verbs and adverbs as describing the action and its speed or acceleration, while nouns and adjectives describe locations and directions.
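The joint embedding idea behind this can be illustrated with a toy sketch: a sentence and a pose sequence are each mapped into the same vector space, so that matching language-motion pairs land close together. This is a minimal, hypothetical illustration, not the actual JL2P code; all function names are invented here, and the real model uses learned neural encoders rather than the hashing and averaging stand-ins below.

```python
# Toy sketch of a language-pose joint embedding (hypothetical, not JL2P's
# implementation): both modalities are projected into one shared space,
# where similarity can be measured with cosine distance.

import hashlib
import math

DIM = 8  # embedding dimensionality (arbitrary for this sketch)

def _token_vec(token):
    # Deterministic pseudo-random vector per token, standing in for
    # learned word embeddings.
    h = hashlib.sha256(token.encode()).digest()
    return [b / 255.0 - 0.5 for b in h[:DIM]]

def encode_sentence(sentence):
    # Average token vectors into one sentence embedding (stand-in for a
    # learned language encoder).
    vecs = [_token_vec(t) for t in sentence.lower().split()]
    return [sum(c) / len(vecs) for c in zip(*vecs)]

def encode_pose(frames):
    # frames: list of per-frame joint-coordinate vectors; averaging
    # stands in for a learned pose-sequence encoder mapping into the
    # same shared space.
    return [sum(c) / len(frames) for c in zip(*frames)]

def cosine(a, b):
    # Similarity in the shared space: 1.0 means identical direction.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)
```

In the real system, both encoders are trained so that a described motion and its recorded pose sequence embed near each other, which is what lets a new sentence be decoded into an animation.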
Abstracts Copyright © 2019 SmithBucklin, Washington, DC, USA