Ask anyone what image comes to mind when you say the word ‘robot’, and there’s no doubt you’ll receive responses inspired by popular culture. For example, the liquid-metal, shape-shifting T-1000 from the movie Terminator 2: Judgment Day (1991); or Optimus Prime, leader of the Autobots and the main character in the Transformers films (2007-). And who can forget Data from Star Trek: The Next Generation (1987-94), a cybernetic version of Pinocchio, seeking to become more human?
These (and countless other) examples share humanoid characteristics embedded in their designs. When Optimus isn’t a truck, he has arms and legs. The T-1000’s default form is human. Data was modelled after his very human creator. By Hollywood standards, the ultimate form of robotic technology would be outwardly indistinguishable from humans themselves.
Our imaginations run free, but technological challenges still limit the creation of real-world robots that perfectly mimic humanoid conventions. I’ve tried to rise to the task nonetheless. My studies in mechanical engineering let me pursue my quest to design mechanisms with challenging parameters. One of them was a human ocular motor simulator. No, it was not a project to build a Terminator component; rather, it was an effort to understand and simulate the behaviour of the human eye.
This required conceiving of an ocular system that could make a saccadic movement – a quick, simultaneous movement of both eyes in the same direction, with a peak velocity of more than 500 degrees per second (yes, we humans do that). Like human eyes, the mechanical system would operate through three independent rotational degrees of freedom (DoF). Our eyes not only move up-down and left-right, but also exhibit torsion – twisting movements. Fitting all the electrical and mechanical parts – joints, links and motors – into an integrated system was a challenge. And all for a very defined and singular task.
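The three independent rotations mentioned above can be sketched as composed rotation matrices. This is a generic kinematics illustration, not the author’s actual simulator; the function names and axis convention are my own assumptions:

```python
import math

def rot_x(a):
    # Rotation about the x-axis (torsion/twisting, in this convention)
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):
    # Rotation about the y-axis (up-down gaze)
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(a):
    # Rotation about the z-axis (left-right gaze)
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):
    # 3x3 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def eye_orientation(yaw, pitch, torsion):
    """Compose the three independent rotations (angles in radians)."""
    return matmul(matmul(rot_z(yaw), rot_y(pitch)), rot_x(torsion))
```

Any gaze direction of the mechanical eye is then one point in this three-angle space; the engineering difficulty lies in driving all three axes with motors fast enough to reach saccadic speeds.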
A few other humanoid-inspired robots followed. And while I succeeded in satisfying each immediate goal, my robots had limitations. For example, I designed an 8-DoF robotic hand and a 7-DoF arm (together weighing 3.7 kg, comparable to a human’s) that were flexible enough to grab and throw a baseball, but could not pick up a coin. The hand could shake with a strong grip but wasn’t able to play a thumb war.
The limbs I was creating were, in short, limited in function. They had a fixed number of joints and actuators, which meant that their functionality and shape were constrained from the moment of their conception. Think about that robotic hand: it had the articulated, motorised joints needed to hit a ball, but it was not suited to making scrambled eggs. If there is an infinite variety of tasks, would we need an infinite variety of robot designs?
The limitless world seen in movies such as Big Hero 6 (2014), with its swarming microbots, seemed far away – until I realised that a flexible and versatile design platform already existed. This method of taking the same basic component and using it to create many distinct and specific forms has been practised for ages. It’s called origami.
Who hasn’t made a paper plane, paper boat or paper crane out of a single sheet of paper? Origami is an already existing and highly versatile platform for designers. From one sheet, you can make multiple shapes – and if you don’t like one, you unfold it and fold it again. In fact, mathematicians have proven that any 3D form can be made by folding 2D surfaces.
Could this be applied to robotic design? Imagine a robotic module that uses polygonal shapes to construct many different forms – many robots for many different tasks. Furthermore, picture an intelligent sheet that could self-fold into any form it wants, depending on the needs of the environment.
I made my first origami robot, which I called a ‘robogami’, about 10 years ago. It was a simple being, a flat-sheeted robot, which could turn into a pyramid and back into a flat sheet, and then into a space shuttle.
My research, conducted with the help of PhD students and a postdoctoral researcher, has advanced since then, and a new robogami generation is now seeing the light of day. This new generation of robogamis serves a purpose: for example, one of them can navigate different terrains autonomously. On dry, flat land, it crawls. If it suddenly meets rough terrain, it starts to roll, activating a different sequence of actuators. And if it meets an obstacle, it simply jumps over it! It does this by storing energy in each of its legs, then releasing it and catapulting itself like a slingshot.
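The terrain-dependent behaviour described above is, at its core, a mode-selection problem. A toy controller might look like the sketch below; this is my own simplified illustration of the idea, not the robogami’s actual control software:

```python
def choose_gait(terrain, obstacle_ahead):
    """Toy controller: pick a locomotion mode from sensed conditions.

    A deliberately simplified illustration of the behaviour described
    in the text; the real robot's sensing and control are far richer.
    """
    if obstacle_ahead:
        return "jump"   # store elastic energy in the legs, then release
    if terrain == "flat":
        return "crawl"  # run the crawling actuation sequence
    if terrain == "rough":
        return "roll"   # switch to a rolling actuation sequence
    return "crawl"      # conservative default for unknown terrain

# Example decisions:
# choose_gait("flat", False)  -> "crawl"
# choose_gait("rough", False) -> "roll"
# choose_gait("rough", True)  -> "jump"
```

Each returned mode would then map to a different folding-joint actuation sequence on the same physical body – the key point being that one robot reuses its structure rather than needing three separate machines.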
The modules could even attach and detach, depending on the environment and the task. Instead of being a single robot made for one specific task, robogamis are designed and optimised from scratch to multitask.
This is an example of a single robogami. But imagine what many robogamis could do as a group. They could join forces to tackle more complex tasks. Each module, whether active or passive, could assemble with others to create different shapes. And not only that: by controlling the folding joints, they could tackle diverse tasks in changing environments. Think about outer space, for example, where conditions are unpredictable. A single robotic platform that can transform to perform multiple tasks increases a mission’s probability of success.
Robogami design owes its drastic geometric reconfigurability to two main scientific breakthroughs. One is its layer-by-layer 2D manufacturing process: multiple functional layers of the essential robotic components (eg, microcontrollers, sensors, actuators, circuits, and even batteries) are stacked on top of each other. The other is the translation of typical mechanical linkages into a variety of folding joints (eg, fixed joints, pin joints, and planar and spherical linkages).
This means that, instead of focusing only on minimising the size of the joint components, we can actually reduce the number of components when designing robots. Systems that would otherwise require numerous parts, complex assembly and careful calibration can be miniaturised by making them flat; the layers can be stacked and still maintain their precision.
One such system is a haptic device, in which user and computer interact through a mechanism such as a joystick. Haptic devices are conventionally used in surgical robotics, where surgeons require high precision and delicate force feedback. Such systems occupy a large operating room with high-DoF robotic arms; the surgeon feels the different stiffness of organs and cavities through a motorised interface that reproduces the forces at the tip of the robotic end-effector.
With robogamis, this haptic technology could be more accessible than ever. A robogami haptic interface would be like a foldable joystick that fits into a cellphone cover. Linked directly to a cellphone, it could serve as a portable joystick that reacts to our daily activities, such as online learning or shopping. It would let you feel the different organs in a human anatomy atlas, the geographical features on a map, or even the hardness or ripeness of different cheeses and peaches.
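The core of such force feedback – feeling a soft peach versus a hard cheese – can be sketched with a classic penalty-based rendering model, where the displayed force grows with how far the probe presses into a virtual surface. The stiffness values and force cap below are illustrative assumptions of mine, not measurements from the robogami device:

```python
def haptic_force(stiffness_n_per_m, penetration_m, max_force_n=5.0):
    """Render a contact force for a virtual object (simple spring model).

    Penalty-based haptic rendering: force is proportional to how far the
    user's probe has pressed into the virtual surface, capped at the
    actuator's limit. All values here are illustrative.
    """
    force = stiffness_n_per_m * penetration_m
    return min(force, max_force_n)

# A soft object (low stiffness) vs a stiff one, both pressed in by 2 mm:
soft = haptic_force(200.0, 0.002)    # 0.4 N
hard = haptic_force(2000.0, 0.002)   # 4.0 N
```

A softer virtual object yields a gentler resisting force for the same finger displacement, which is exactly the difference a user would perceive between ripe and unripe fruit.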
Robotics technology is advancing to be more personalised and adaptive for humans, and this unique species of reconfigurable origami robots shows immense promise. It could become the platform to provide the intuitive, embeddable robotic interface to meet our needs. The robots will no longer look like the characters from the movies. Instead, they will be all around us, continuously adapting their form and function – and we won’t even know it.
This article was originally published at Aeon and has been republished under Creative Commons.