In recent years, researchers have achieved great success in guaranteeing safety in human-robot interaction, yielding a new generation of robots that can work with humans in close proximity, known as collaborative robots (cobots). However, because most cobots lack the ability to understand and coordinate with their human partners, the ``co'' in ``cobot'' still stands for ``coexistence'' rather than ``collaboration''. This thesis develops an adaptive learning and control framework with a novel physical and data-driven approach toward a truly collaborative robot.
The first part focuses on online human motion prediction. A comprehensive study of various motion prediction techniques is presented, covering their scope of application, accuracy across different time scales, and implementation complexity. Based on this study, a hybrid approach that combines physically well-understood models with data-driven learning techniques is proposed and validated on a human motion data set.
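As an illustration only (not the thesis's actual implementation), such a hybrid predictor can be sketched as a physically motivated extrapolation plus a learned correction term. The function names and the constant-velocity baseline below are assumptions made for the example:

```python
import numpy as np

def predict_constant_velocity(positions, horizon, dt):
    """Physical baseline: extrapolate forward using the last observed velocity.

    positions: array of shape (T, dim), past observations at spacing dt.
    Returns an array of shape (horizon, dim) of future positions.
    """
    v = (positions[-1] - positions[-2]) / dt  # finite-difference velocity
    return np.array([positions[-1] + v * dt * (k + 1) for k in range(horizon)])

def hybrid_predict(positions, horizon, dt, residual_model=None):
    """Hybrid scheme: physical extrapolation plus a data-driven residual.

    residual_model (hypothetical) maps past positions to a learned
    correction of shape (horizon, dim); None disables the correction.
    """
    pred = predict_constant_velocity(positions, horizon, dt)
    if residual_model is not None:
        pred = pred + residual_model(positions)
    return pred
```

The physical model keeps short-horizon predictions well-behaved, while the residual term can absorb systematic, data-dependent deviations that a pure physics model misses.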
The second part addresses interaction control in human-robot collaboration. An adaptive impedance control scheme with human reference estimation is presented. Reinforcement learning is used to find optimal control parameters that minimize a task-oriented cost function without full knowledge of the system dynamics.
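To make the idea concrete, here is a minimal sketch, not the thesis's actual controller: a 1-D impedance law (virtual spring-damper toward an estimated human reference) whose stiffness is tuned by a model-free cost evaluation, standing in for the reinforcement learning step. All parameter values and the simple grid search are assumptions for illustration:

```python
import numpy as np

def simulate(k, d, x_ref=1.0, mass=1.0, dt=0.01, steps=200):
    """Roll out a 1-D mass under the impedance law f = k*(x_ref - x) - d*x_dot
    and accumulate a task-oriented cost (tracking error plus force effort)."""
    x, x_dot, cost = 0.0, 0.0, 0.0
    for _ in range(steps):
        f = k * (x_ref - x) - d * x_dot        # impedance control force
        x_dot += (f / mass) * dt               # explicit Euler integration
        x += x_dot * dt
        cost += ((x_ref - x) ** 2 + 1e-3 * f ** 2) * dt
    return cost

# Model-free parameter search over stiffness (a crude stand-in for RL),
# with damping set near critical for each candidate stiffness:
candidates = [5.0, 20.0, 80.0]
best_k = min(candidates, key=lambda k: simulate(k, d=2 * np.sqrt(k)))
```

The key point mirrored here is that the cost is evaluated purely from rollouts, so no explicit model of the coupled human-robot dynamics is required to improve the control parameters.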
The proposed framework is experimentally validated through two benchmark applications for human-robot collaboration: object handover and cooperative object handling. Results show that the robot can provide reliable online human motion prediction, react early to variations in human motion, contribute proactively to physical collaboration, and behave compliantly in response to human forces.