China just dropped a game-changing resource for AI development this week – the Baihu-VTouch dataset, featuring over 60,000 minutes of robot interaction data combining vision AND touch! 🌟 This treasure trove, released through a collaboration between the National and Local Co-built Humanoid Robot Innovation Center and tech partners, could finally help robots 'feel' their way through tasks like humans do.
Why does this matter? 🤔 Until now, most training data has focused purely on visual inputs – meaning robots struggle with delicate tasks or with working in dim lighting. The new dataset records pressure patterns and surface deformation across 380+ real-world scenarios like:
- 🧺 Household chores (hello future robot butlers!)
- 🏭 Industrial assembly lines
- 🍳 Food service operations
- 🔧 Specialized technical work
The dataset covers 500+ everyday objects and 100+ manipulation skills – from carefully rotating fragile items to precisely inserting components. It’s already being called "the Wikipedia of robotic touch" by developers.
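For readers wondering what "vision plus touch" data might look like in code, here's a minimal, purely hypothetical Python sketch of one synchronized frame – the field names, array shapes, and threshold are illustrative assumptions, not the actual Baihu-VTouch format:

```python
from dataclasses import dataclass
import numpy as np

# Hypothetical layout of a single vision-and-touch sample. The real
# Baihu-VTouch schema isn't described in the article, so every field
# name and shape below is an assumption for illustration only.
@dataclass
class VisionTouchFrame:
    timestamp_ms: int                 # when the frame was captured
    rgb_image: np.ndarray             # e.g. (480, 640, 3) camera frame
    tactile_pressure: np.ndarray      # e.g. (16, 16) fingertip pressure grid
    surface_deformation: np.ndarray   # e.g. (16, 16) contact deformation map
    gripper_pose: np.ndarray          # e.g. (7,) position + quaternion
    task_label: str                   # e.g. "rotate_fragile_item"

def contact_detected(frame: VisionTouchFrame, threshold: float = 0.05) -> bool:
    """Toy check: did any tactile cell exceed a pressure threshold?"""
    return bool((frame.tactile_pressure > threshold).any())

# Toy usage with placeholder arrays standing in for a real recording.
frame = VisionTouchFrame(
    timestamp_ms=0,
    rgb_image=np.zeros((480, 640, 3), dtype=np.uint8),
    tactile_pressure=np.random.rand(16, 16) * 0.1,
    surface_deformation=np.zeros((16, 16)),
    gripper_pose=np.zeros(7),
    task_label="rotate_fragile_item",
)
print(contact_detected(frame))
```

The point of pairing a pressure grid with each camera frame is exactly what the release promises: a robot can tell it has made contact (and how hard) even when the camera alone can't see it.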
This builds on China’s Hubei Humanoid Robot Center, launched last year, which collects 10 million+ data points annually across 23 simulated environments. With 6,000 minutes of Baihu-VTouch already live on the OpenLoong platform, researchers worldwide are geeking out over what’s next. Could tactile-aware robots become mainstream by 2027? The race is on! 🚀
Reference(s):
China releases 60,000-minute vision-and-touch robotics dataset
cgtn.com