Japan has made child robot faces more realistic and it’s weird AF (VIDEOS)

                        <figure>
                <img src="https://cdni.rt.com/files/2018.11/article/5beeae7bfc7e931e7c8b45cb.png">
                <figcaption>© Youtube / Hisashi Ishihara / <span class="copyright">Free</span></figcaption>
            </figure>

                        <strong>Japanese researchers have created an android robot child head that’s more expressive than any before. They have released video showing the impressive, and frankly creepy, results.</strong>

        Researchers from Osaka University have come up with a way to identify and evaluate facial movements in their robot child head, named Affetto. In doing so, they have created a version of Affetto that&rsquo;s far more expressive than the first generation model which was created in 2011.
<figure>

                <div class="media__youtube">            <iframe class="media__youtube-frame" width="560" height="315" src="https://www.youtube.com/embed/EKFc1DEoO6U" frameborder="0" allowfullscreen></iframe>
</div>
        <figcaption></figcaption>
        </figure>

So much so that it’s a little bit weird. The team released three video clips showcasing Affetto’s expressions. He is seen blinking, squinting, laughing, looking confused and scrunching up his face. He gives a lot of sideways glances too and appears to attempt a wink at one stage.  

“Our precise findings will let us effectively control android facial movements to introduce more nuanced expressions, such as smiling and frowning,” author Hisashi Ishihara said.

Limited expressions

Robot faces are unable to accurately replicate the complexities of a human’s expressions. This is down to a number of factors, from a lack of suitable material to mimic human skin, to the asymmetric way human faces express emotions. Then there’s the technology required to program these expressions.

<figure>

                <div class="media__youtube">            <iframe class="media__youtube-frame" width="560" height="315" src="https://www.youtube.com/embed/KbEIv9ONajs" frameborder="0" allowfullscreen></iframe>
</div>
        <figcaption></figcaption>
        </figure>

Surface deformations on the robot faces are a key issue when it comes to controlling them. “Movements of their soft facial skin create instability, and this is a big hardware problem we grapple with,” study author Minoru Asada explained.

“We sought a better way to measure and control it.”

Researching Affetto

The team measured the movement of 116 different points on Affetto's face. These points were driven by "deformation units," each programmed to produce a particular facial movement, such as raising part of his lip.

<figure>

                <div class="media__youtube">            <iframe class="media__youtube-frame" width="560" height="315" src="https://www.youtube.com/embed/IxuluHiwGSk" frameborder="0" allowfullscreen></iframe>
</div>
        <figcaption></figcaption>
        </figure>

They created a mathematical model to quantify the surface motion patterns, and used this system to control Affetto's synthetic skin more precisely.
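The study's actual model isn't detailed here, but the general idea of mapping actuator commands to measured skin movement can be illustrated with a toy sketch. The sketch below (in Python, using simulated data; all variable names and the linear-model assumption are hypothetical, not taken from the study) fits a coupling matrix between deformation-unit commands and facial-point displacements, then inverts it to find commands that approximate a target expression:

```python
import numpy as np

# Hypothetical sketch: treat each facial point's displacement as an
# (approximately) linear function of the deformation-unit commands,
# and fit that mapping from observed trials by least squares.

rng = np.random.default_rng(0)

n_units = 5      # number of deformation units (illustrative)
n_points = 116   # facial measurement points, as in the article
n_samples = 200  # actuation trials (simulated)

# Simulated ground truth: an unknown unit-to-point coupling matrix
true_map = rng.normal(size=(n_units, n_points))

# Simulated experiment: random commands and the (noisy) displacements they cause
commands = rng.uniform(0.0, 1.0, size=(n_samples, n_units))
displacements = commands @ true_map + 0.01 * rng.normal(size=(n_samples, n_points))

# Fit the coupling matrix from the (command, displacement) pairs
fitted_map, *_ = np.linalg.lstsq(commands, displacements, rcond=None)

# Invert the model: find commands that best approximate a target deformation
target = rng.normal(size=n_points)
cmd, *_ = np.linalg.lstsq(fitted_map.T, target, rcond=None)
```

In a real system the skin responds nonlinearly and units interact, so the linear fit is only a starting point, but it captures the shape of the problem: measure many points, model how actuation moves them, then solve backwards for the expression you want.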


The team’s progress opens the door for android robots to have more realistic expressions, in the hope of forging deeper interactions with humans.
