Report Details
Management No.: 20190000000271
Title: FY2018 Interim Annual Report: Development of Core Technologies for Next-Generation Artificial Intelligence and Robots / Next-Generation Artificial Intelligence Technology Field / Research and Development of Brain Data-Driven Artificial Intelligence Based on Computational Neuroscience
Publication date: 2019/6/19
Report fiscal years: 2018 - 2018
Contractor: Advanced Telecommunications Research Institute International (ATR)
Project No.: P15009
Department: Robot and AI Technology Department
Japanese abstract:
English abstract:
Title: Development of Core Technologies for Next-Generation AI and Robotics / Next-generation AI technology / Development of brain data-driven artificial intelligence based on computational neuroscience (FY2015-FY2019) FY2018 Annual Report

1. Development of artificial vision system
Bidirectional deep network: To examine the importance of bidirectional connections, we trained a deep convolutional neural network (CNN) without bidirectional connections on a large-scale database of face images. We then examined the consistency between the layer-specific representations of the trained CNN and those observed in macaque monkey physiology (Freiwald and Tsao, 2009; 2010). No CNN unit showed good agreement with the monkeys' face-selective neurons in every aspect reported in that series of physiological experiments.
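The report does not specify how CNN and neural representations were compared; one common way to quantify such layer-wise consistency is representational similarity analysis (RSA). The sketch below uses synthetic arrays as stand-ins for CNN-layer responses and recorded neural responses — all data and dimensions are illustrative assumptions, not the study's data:

```python
import numpy as np

def rdm(responses):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the response patterns of each stimulus pair.
    responses: (n_stimuli, n_units)."""
    return 1.0 - np.corrcoef(responses)

def rsa_score(layer_responses, neural_responses):
    """Rank correlation between the upper triangles of the two RDMs
    (Pearson on ranks, a simple stand-in for Spearman's rho)."""
    iu = np.triu_indices(layer_responses.shape[0], k=1)
    a = rdm(layer_responses)[iu]
    b = rdm(neural_responses)[iu]
    ra = a.argsort().argsort().astype(float)
    rb = b.argsort().argsort().astype(float)
    return float(np.corrcoef(ra, rb)[0, 1])

rng = np.random.default_rng(0)
stimuli = rng.normal(size=(20, 50))                 # 20 face stimuli, 50-dim
cnn_layer = stimuli @ rng.normal(size=(50, 30))     # toy CNN layer responses
neurons = stimuli @ rng.normal(size=(50, 12))       # toy recorded neurons
score = rsa_score(cnn_layer, neurons)
```

A layer whose RDM rank-correlates highly with the neural RDM represents the stimuli with a similar geometry, even if individual units do not match individual neurons.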
Deep image reconstruction: We developed a framework that reconstructs perceived images from human brain activity by incorporating a deep generator network (DGN). In a human inspection study, images generated with the DGN were judged more similar to the target images than those generated without it. We also found that integrating neural representations spanning multiple CNN layers was more effective than using the representation of any single layer alone.
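The reconstruction step can be pictured as optimizing a generator's latent code so that the generated image's features match brain-decoded features across multiple CNN layers at once. Below is a minimal sketch with linear stand-ins for the generator and two feature layers; everything here (matrices, targets, step size) is a toy assumption, whereas the actual framework uses deep networks and features decoded from fMRI:

```python
import numpy as np

rng = np.random.default_rng(1)
d_img, d_lat, d_feat = 64, 8, 16

# Toy linear stand-ins for the generator G and two "CNN layers" F1, F2
G = rng.normal(size=(d_img, d_lat)) / np.sqrt(d_img)
F1 = rng.normal(size=(d_feat, d_img)) / np.sqrt(d_img)
F2 = rng.normal(size=(d_feat, d_img)) / np.sqrt(d_img)

# Synthetic "brain-decoded" feature targets for both layers
z_true = rng.normal(size=d_lat)
t1, t2 = F1 @ G @ z_true, F2 @ G @ z_true

# Gradient descent on the latent code, matching BOTH layers' features;
# integrating layers simply sums their squared-error gradients.
A1, A2 = F1 @ G, F2 @ G
z = np.zeros(d_lat)
lr = 0.3
for _ in range(5000):
    grad = A1.T @ (A1 @ z - t1) + A2.T @ (A2 @ z - t2)
    z -= lr * grad
reconstruction = G @ z
```

Using both layers makes the objective better constrained than either layer alone, which mirrors the report's finding that multi-layer integration was more effective than a single layer.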
2. Development of artificial motor control system
Deep imitation learning: We developed a regularization method for inverse reinforcement learning based on the KL divergence from a baseline policy and the policy entropy; the corresponding reinforcement learning showed excellent sample efficiency. In collaboration with Dr. Matsubara (NAIST), we implemented the deep reinforcement learning method on a robot. The new implementation allowed the robot to perform not only turning-over motions of a piece of fabric but also folding motions of T-shirts, with fewer learning trials than a deep Q-network (DQN) required.
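The idea of regularizing toward a baseline policy can be illustrated with the textbook KL-regularized update: for a discrete action set, the policy maximizing E_pi[Q] - beta * KL(pi || pi0) has the closed form pi(a) proportional to pi0(a) * exp(Q(a)/beta). This is a generic sketch of that update, not the report's exact algorithm:

```python
import numpy as np

def kl_regularized_policy(q_values, baseline_policy, beta=1.0):
    """Closed-form maximizer of E_pi[Q] - beta * KL(pi || baseline):
    pi(a) proportional to baseline(a) * exp(Q(a) / beta).
    Small beta trusts Q; large beta stays close to the baseline."""
    logits = np.log(baseline_policy) + q_values / beta
    logits -= logits.max()          # subtract max for numerical stability
    p = np.exp(logits)
    return p / p.sum()

q = np.array([1.0, 2.0, 0.5])       # toy action values
pi0 = np.array([1/3, 1/3, 1/3])     # uniform baseline policy
pi_greedy = kl_regularized_policy(q, pi0, beta=0.5)
pi_conservative = kl_regularized_policy(q, pi0, beta=100.0)
```

With a small beta the policy concentrates on the best action; with a large beta it barely moves from the baseline, which is what limits destructive updates and helps sample efficiency.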
Parallel and hierarchical architecture for motor control: We implemented our real-time reinforcement learning algorithm on a humanoid robot so that it could perform catch/push motions with a basketball. After a hot start that imitated human catch/push motions recorded with a motion-capture system, the robot continuously produced catch-and-push behaviors on the basketball within the real-time reinforcement learning framework.
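The hot-start scheme can be sketched as supervised fitting of a policy to demonstration data, followed by a reward-weighted refinement step. All data, the linear policy class, and the reward below are hypothetical stand-ins; the actual system used motion-capture demonstrations and a humanoid robot:

```python
import numpy as np

rng = np.random.default_rng(2)
w_true = np.array([0.5, -0.2, 0.1, 0.3])      # hypothetical "expert" mapping

# Hypothetical demonstration data (stand-in for motion capture): state -> action
demo_states = rng.normal(size=(100, 4))
demo_actions = demo_states @ w_true[:, None]

# Hot start: supervised least-squares fit of a linear policy to demonstrations
W, *_ = np.linalg.lstsq(demo_states, demo_actions, rcond=None)

def reward(state, action):
    """Toy reward: penalize squared deviation from the expert mapping."""
    return -float((action - state @ w_true) ** 2)

# Refinement by reward-weighted regression (one illustrative iteration):
# explore with noisy actions, then refit with weights exp(reward / temperature).
states = rng.normal(size=(200, 4))
noisy_actions = states @ W + 0.1 * rng.normal(size=(200, 1))
rewards = np.array([reward(s, a) for s, a in zip(states, noisy_actions.ravel())])
weights = np.exp((rewards - rewards.max()) / 0.01)
Sw = states * weights[:, None]
W_new = np.linalg.solve(Sw.T @ states, Sw.T @ noisy_actions)
```

Starting from the imitation fit means exploration happens around already-plausible motions, which is what makes continuing with real-time reinforcement learning feasible on hardware.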