University of Bristol Team Designs Dual-Arm Robot That Can Learn Bimanual Tasks From Simulation

The Bi-Touch system allows AI agents to interpret their environment and train in simulation within hours.

Yijiong Lin, University of Bristol


Dual-arm robot holding a potato chip.
University of Bristol researchers have developed the Bi-Touch system to enable a robot to quickly learn from its environment and train in a virtual world for manipulation.

An innovative bimanual robot displays tactile sensitivity close to human-level dexterity, using artificial intelligence to inform its actions. The new Bi-Touch system, designed by scientists at the University of Bristol and based at the Bristol Robotics Laboratory, allows robots to carry out manual tasks by sensing what to do from a digital helper.

The findings, published in IEEE Robotics and Automation Letters, show how an AI agent interprets its environment through tactile and proprioceptive feedback and then controls the robot's behaviors. The researchers said this enables precise sensing, gentle interaction, and effective object manipulation to accomplish robotic tasks.

They added that this development could revolutionize industries such as fruit picking and domestic service, and could eventually recreate touch in artificial limbs.

“With our Bi-Touch system, we can easily train AI agents in a virtual world within a couple of hours to achieve bimanual tasks that are tailored towards the touch,” explained Yijiong Lin, lead author and a member of the University of Bristol's Faculty of Engineering. “And more importantly, we can directly apply these agents from the virtual world to the real world without further training.”

“The tactile bimanual agent can solve tasks even under unexpected perturbations and manipulate delicate objects in a gentle way,” he said.

Bimanual manipulation depends on hardware availability

Bimanual manipulation with tactile feedback will be key to human-level robot dexterity, said the University of Bristol researchers. However, the topic is less explored than single-arm settings, partly due to the limited availability of suitable hardware and the complexity of designing effective controllers for tasks with relatively large state-action spaces.

The team developed a tactile dual-arm robotic system using recent advances in AI and robotic tactile sensing.

The researchers built a virtual world in simulation containing two robot arms equipped with tactile sensors. They then designed reward functions and a goal-update mechanism to encourage the robot agents to learn the bimanual tasks. The scientists also developed a real-world tactile dual-arm robot system to which they could directly apply the trained agent.
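As a rough illustration of how such a reward function and goal-update mechanism might fit together for a bimanual lifting task, here is a minimal Python sketch. The function names, thresholds, and shaping terms are hypothetical assumptions for illustration, not the authors' released implementation.

import numpy as np

GOAL_STEP = 0.02     # metres to raise the lift goal after each success (assumed)
FORCE_LIMIT = 5.0    # tolerated tactile contact force before penalty (assumed)

def reward(obj_pos, goal_pos, left_force, right_force):
    """Shaped reward: approach the goal position, stay gentle on contact."""
    dist = np.linalg.norm(obj_pos - goal_pos)
    r = -dist                                  # closer to the goal -> higher reward
    for force in (left_force, right_force):
        if force > FORCE_LIMIT:                # penalise squeezing too hard
            r -= 0.1 * (force - FORCE_LIMIT)
    return r

def update_goal(obj_pos, goal_pos, tol=0.005):
    """Goal-update mechanism: once the object reaches the current goal,
    raise the goal slightly so the agent keeps lifting."""
    if np.linalg.norm(obj_pos - goal_pos) < tol:
        goal_pos = goal_pos + np.array([0.0, 0.0, GOAL_STEP])
    return goal_pos

The gentle-contact penalty term reflects the paper's emphasis on delicate manipulation: the agent is rewarded for progress toward the goal but discouraged from gripping with excessive force.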

The robot learns bimanual skills through Deep Reinforcement Learning (Deep-RL), one of the most advanced techniques in the field of robot learning. It is designed to teach robots to do things by letting them learn from trial and error, akin to training a dog with rewards and punishments.

For robotic manipulation, the robot learns to make decisions by attempting various behaviors to achieve designated tasks, such as lifting objects without dropping or breaking them. When it succeeds, it gets a reward, and when it fails, it learns what not to do.

Over time, it figures out the best ways to grab things using these rewards and punishments. The AI agent is visually blind, relying only on proprioceptive feedback (the body's ability to sense movement, action, and location) and tactile feedback.
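To make the trial-and-error idea concrete, below is a minimal Gym-style training-loop skeleton in Python. The Agent class and the environment interface are hypothetical stand-ins (the observation is assumed to be a vector of proprioceptive and tactile readings only, with no camera input), not the authors' open-source code.

import numpy as np

class Agent:
    """Placeholder policy; a real system would use a deep-RL algorithm
    such as PPO or SAC here."""
    def act(self, obs):
        return np.zeros(12)             # e.g. 6 joint commands per arm (assumed)

    def learn(self, obs, action, rew, next_obs, done):
        pass                            # gradient update would go here

def train(env, agent, episodes=1000):
    """Trial-and-error loop: obs contains only proprioceptive (joint)
    state and tactile readings; the agent is visually blind."""
    for _ in range(episodes):
        obs, done = env.reset(), False
        while not done:
            action = agent.act(obs)
            next_obs, rew, done, info = env.step(action)   # reward on success
            agent.learn(obs, action, rew, next_obs, done)  # learn what (not) to do
            obs = next_obs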

University of Bristol team reports results

The Bristol Robotics Laboratory researchers said they successfully enabled the dual-arm robot to safely lift items as fragile as a single Pringle crisp.

“Our Bi-Touch system showcases a promising approach with affordable software and hardware for learning bimanual behaviours with touch in simulation, which can be directly applied to the real world,” said Prof. Nathan Lepora, co-author of the study. “Our tactile dual-arm robot simulation allows further research on different tasks, as the code will be open-source, which is ideal for developing other downstream tasks.”

“Our Bi-Touch system allows a tactile dual-arm robot to learn solely from simulation, and to achieve various manipulation tasks in a gentle way in the real world,” Lin concluded.

“And now we can easily train AI agents in a virtual world within a couple of hours to achieve bimanual tasks that are tailored towards the touch.”

