November 24, 2022
Researchers at the Electronics and Telecommunications Research Institute (ETRI) in Korea have recently developed a deep learning-based model that could help to produce engaging nonverbal social behaviors, such as hugging or shaking someone's hand, in robots. Their model, presented in a paper pre-published on arXiv, can actively learn new context-appropriate social behaviors by observing interactions among humans.
"Deep learning techniques have produced interesting results in areas such as computer vision and natural language understanding," Woo-Ri Ko, one of the researchers who carried out the study, told TechXplore. "We set out to apply deep learning to social robotics, specifically by allowing robots to learn social behavior from human-human interactions on their own. Our method requires no prior knowledge of human behavior models, which are usually costly and time-consuming to implement."
The artificial neural network (ANN)-based architecture developed by Ko and his colleagues combines the Seq2Seq (sequence-to-sequence) model introduced by Google researchers in 2014 with generative adversarial networks (GANs). The new architecture was trained on the AIR-Act2Act dataset, a collection of 5,000 human-human interactions occurring in 10 different scenarios.
"The proposed neural network architecture consists of an encoder, decoder and discriminator," Ko explained. "The encoder encodes the current user behavior, the decoder generates the next robot behavior according to the current user and robot behaviors, and the discriminator prevents the decoder from outputting invalid pose sequences when generating long-term behavior."
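The data flow Ko describes can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration of how the three parts connect; the pose dimension, hidden size, and pooling choices are assumptions for illustration, and the actual model uses recurrent Seq2Seq layers with adversarial training rather than these toy linear layers.

```python
# Toy sketch of the encoder-decoder-discriminator data flow (shapes and
# layer sizes are assumptions, not taken from the paper).
import numpy as np

rng = np.random.default_rng(0)

POSE_DIM = 30   # assumed size of a joint-pose vector
HIDDEN = 64     # assumed context-vector size

W_enc = rng.standard_normal((POSE_DIM, HIDDEN)) * 0.01
W_dec = rng.standard_normal((HIDDEN + POSE_DIM, POSE_DIM)) * 0.01
W_disc = rng.standard_normal((POSE_DIM, 1)) * 0.01

def encode(user_seq):
    # Summarize the observed user pose sequence into one context vector
    # (mean pooling stands in here for a recurrent encoder).
    return np.tanh(user_seq @ W_enc).mean(axis=0)

def decode(context, robot_pose, steps):
    # Autoregressively generate the robot's next poses from the context
    # and the robot's current pose.
    poses = []
    pose = robot_pose
    for _ in range(steps):
        pose = np.tanh(np.concatenate([context, pose]) @ W_dec)
        poses.append(pose)
    return np.stack(poses)

def discriminate(pose_seq):
    # Score the plausibility of a generated pose sequence; in training,
    # this signal penalizes invalid long-term behavior.
    return float(1.0 / (1.0 + np.exp(-(pose_seq @ W_disc).mean())))

user_seq = rng.standard_normal((20, POSE_DIM))  # 20 observed user frames
ctx = encode(user_seq)
next_poses = decode(ctx, np.zeros(POSE_DIM), steps=10)
score = discriminate(next_poses)
print(next_poses.shape, score)
```

The key design point the quote highlights is the discriminator: without it, an autoregressive decoder tends to drift into implausible poses over long horizons, so an adversarial plausibility score is used to keep generated sequences valid.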
The 5,000 interactions included in the AIR-Act2Act dataset were used to extract more than 110,000 training samples (i.e., short videos) in which humans performed specific nonverbal social behaviors while interacting with others. The researchers specifically trained their model to generate five nonverbal behaviors for robots, namely bowing, staring, shaking hands, hugging and blocking their own face.
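One common way a few thousand recorded interactions can yield over a hundred thousand training samples is to slide a fixed-length window over each interaction's frame sequence. The sketch below illustrates that idea only; the window size, stride, and frame rate are hypothetical and not the paper's actual preprocessing.

```python
# Hypothetical sliding-window clip extraction: one recorded interaction
# produces many overlapping fixed-length training clips.
def sliding_windows(num_frames, window=30, stride=5):
    """Return (start, end) frame-index pairs of fixed-length clips."""
    return [(s, s + window) for s in range(0, num_frames - window + 1, stride)]

# e.g., a 10-second interaction captured at 30 fps has 300 frames:
clips = sliding_windows(300)
print(len(clips))  # 55 overlapping clips from a single interaction
```

With overlap like this, each interaction contributes dozens of samples, which is how a dataset of 5,000 interactions can plausibly expand into 110,000+ training clips.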
Ko and his colleagues evaluated their model for nonverbal social behavior generation in a series of simulations, specifically applying it to a simulated version of Pepper, a humanoid robot that is widely used in research settings. Their initial findings were promising, as their model successfully generated the five behaviors it was trained on at appropriate times during simulated interactions with humans.
"We showed that it is possible to teach robots different kinds of social behaviors using a deep learning approach," Ko said. "Our model can also generate more natural behaviors, instead of repeating pre-defined behaviors in the existing rule-based approach. With the robot generating these social behaviors, users will feel that their behavior is understood and emotionally cared for."
The new model created by this team of researchers could help to make social robots more adaptive and socially responsive, which could in turn improve the overall quality and flow of their interactions with human users. In the future, it could be implemented and tested on a wide range of robotic systems, including home service robots, guide robots, delivery robots, educational robots, and telepresence robots.
"We now intend to conduct further experiments to test a robot's ability to exhibit appropriate social behaviors when deployed in the real world and facing a human; the proposed behavior generator would be tested for its robustness to noisy input data that a robot is likely to acquire," Ko added. "Moreover, by collecting and learning more interaction data, we plan to extend the number of social behaviors and complex actions that a robot can exhibit."
Woo-Ri Ko et al, Nonverbal Social Behavior Generation for Social Robots Using End-to-End Learning, arXiv (2022). DOI: 10.48550/arxiv.2211.00930
Ilya Sutskever et al, Sequence to Sequence Learning with Neural Networks, arXiv (2014). DOI: 10.48550/arxiv.1409.3215
Woo-Ri Ko et al, AIR-Act2Act: Human–human interaction dataset for teaching non-verbal social behaviors to robots, The International Journal of Robotics Research (2021). DOI: 10.1177/0278364921990671
© 2022 Science X Network
A deep learning model that generates nonverbal social behavior for robots (2022, November 24)
retrieved 24 November 2022
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without the written permission. The content is provided for information purposes only.