Our work gives an insight into how robotics could, in the future, create hands that function similarly to real biological hands, says Jernej Barbic, Associate Professor of Computer Science at the USC Viterbi School of Engineering, in an exclusive interview with Akanki Sharma
How important is the role of prosthetics in the medical field? How has it evolved through these years in India and across the world?
Prosthetics make it possible to replace lost or damaged human limbs, and are thus significant in our society. Most robotic hands today do not attempt to directly mimic real human hand anatomy; instead, they build hands as robotic devices with rigid parts actuated by servo motors at the joints. In real hands, however, there is a complex interplay between the soft tissues (hand muscles, tendons) and the bones, which makes the real human hand much more versatile than the robotic hands of today. Our work gives insight into how robotics could, in the future, create hands that function similarly to real biological hands.
What led you to develop this model? What are its key functions and features? Also, elaborate on how it is the most realistic computational model of the human hand in motion.
I have been working in the research areas of computer graphics, simulation and animation for nearly 20 years. I always wanted to model hands, as they are an important part of the human body, and their simulation therefore has many applications. Over the years, I developed many techniques for the computational modelling of elastic objects and started applying them to the human body, both the hard tissues (bones) and the soft tissues (muscles, fat). At some point, I realised that we could combine these techniques into high-quality models for the animation and simulation of the human hand. Our hand model consists of bones, muscles and skin that were measured from a real person’s hand. What differentiates our model from other models is that we acquired it in multiple hand poses, not just one. This made it possible to build a computer model of how precisely each bone of the human hand translates and rotates around its parent bone. Similarly, we can build a data-driven model of how the muscles and tendons in the human hand actuate as the hand is articulated. While many researchers have built hand anatomy models before, our work models the hand anatomy in motion. The geometric positions and shapes of our bones, muscles and skin are correct not just in one pose, but across multiple poses representing the hand’s range of motion.
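The idea of a data-driven bone kinematics model built from multiple captured poses can be sketched in miniature. The sketch below is hypothetical (the numbers, the single flexion parameter and the reduction of a bone's rigid motion to one rotation angle and one translation are all illustrative assumptions, not the actual model); it only shows the principle of interpolating a bone's measured motion between scanned poses:

```python
from bisect import bisect_left

# Hypothetical captured data for one finger bone: each scanned pose gives a
# joint flexion parameter (degrees) and the bone's measured motion relative
# to its parent bone, reduced here to (rotation in degrees, translation in mm).
captured = [
    (0.0,  (0.0,  0.0)),   # open hand
    (45.0, (43.5, 1.2)),   # half-closed
    (90.0, (88.0, 2.1)),   # fist
]

def bone_transform(flexion):
    """Piecewise-linearly interpolate the bone's rotation and translation
    for a new flexion value from the captured poses; clamp outside the
    captured range of motion."""
    params = [p for p, _ in captured]
    if flexion <= params[0]:
        return captured[0][1]
    if flexion >= params[-1]:
        return captured[-1][1]
    i = bisect_left(params, flexion)
    p0, (r0, t0) = captured[i - 1]
    p1, (r1, t1) = captured[i]
    w = (flexion - p0) / (p1 - p0)
    return (r0 + w * (r1 - r0), t0 + w * (t1 - t0))
```

Note that the interpolated rotation at 45 degrees of flexion is 43.5 degrees, not 45: this is the point of measuring real anatomy, since a bone's actual motion need not track the nominal joint angle exactly.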
Little is understood about the complexity of the hand’s underlying anatomy, which is why animating human hands has long been considered one of the most challenging problems in computer graphics. How do you address this issue?
We acquired the internal hand anatomy in multiple poses using an MRI scanner. We performed a separate MRI scan in each pose and then extracted the internal hand anatomy (bones, muscles, fat) from each scan. From this data, we built a computational model of how the bones translate and rotate relative to each other, in three dimensions, inside the human hand. Given a brand new hand motion, we first use this model to translate and rotate all the bones. Thereafter, we run a soft-tissue finite element simulation to compute the motion of the fat and skin, which gives us the final realistic hand appearance. We can simulate any motion of our subject in this way, even if it is quite different from the captured poses. We repeated this procedure for two subjects: one male and one female. The biggest challenge we addressed in our work was keeping the human hand still in a fixed pose inside the MRI scanner. We resolved this by building a tight-fitting rubber mould, one per pose. Each mould is a perfect negative image of the hand in a specific pose. We manufactured it using techniques from the special effects industry in Hollywood. Prior to MRI scanning, the subject inserts his/her hand into this mould, which keeps the hand still during scanning. This innovation made it possible to create high-resolution scans of the hand in multiple poses.
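The two-stage idea described here (the bone model sets the boundary conditions, and an elastic simulation then fills in the soft tissue) can be illustrated with a deliberately simplified stand-in. The one-dimensional relaxation below is a hypothetical toy, not the actual finite element solver: the two end nodes are pinned to bone-driven positions, and the interior tissue nodes settle into elastic equilibrium (each node at the average of its neighbours, a discrete Laplace equation):

```python
def relax_soft_tissue(left, right, n_interior, iters=2000):
    """Toy quasi-static elasticity in 1D: end nodes are pinned to
    bone-driven positions `left` and `right`; interior nodes relax by
    Gauss-Seidel iteration until each sits at the average of its
    neighbours."""
    x = [left] + [left] * n_interior + [right]
    for _ in range(iters):
        for i in range(1, len(x) - 1):
            x[i] = 0.5 * (x[i - 1] + x[i + 1])
    return x
```

Per animation frame, the real pipeline does the analogue in three dimensions: pose the bones with the kinematic model, then solve the elasticity equations for the fat and skin around them.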
What capacity does it hold for medical education in India? Are you in touch with any of the medical colleges, or do you plan to do so?
We can create virtual reality three-dimensional animations of how the bones move inside a human hand as one articulates it. Sensors track the real hand and detect its motion, including the motion of the fingers, in real time. We can then display a real-time version of the hand in virtual reality and render the skin and the soft tissue (fat) transparently. A person will therefore be able to see his/her ‘real’ hand and its internal anatomy in motion, similar to, say, undergoing a continuous X-ray of the hand, but without any radiation dangers. We are in touch with the medical school at our university (University of Southern California, in Los Angeles) about medical education activities.
What benefits can it provide in the fields of robotics, virtual reality and graphic designing?
For virtual reality and computer graphics, it provides highly realistic hands that look like real hands not just in one pose, but across the entire range of motion of the hand. For robotics, it gives an insight into how real biological hands work as they are actuated, which makes it easier to replicate this functionality in future robotic hands.
What was the cost incurred in creating it, and in what way will researchers take it to the next level?
The cost of creating our two hand models was approximately $10,000. One future avenue for our work is to study how the bone and muscle kinematic models vary across subjects, by MRI-scanning a large number of people. This may make it possible to build generic computational models of an entire population. Given a new subject, we may then be able to diagnose how this specific subject differs from the general population. We may also be able to identify defects in the function of the subject’s hand, perhaps simply from a video of the hand in motion.