SociBot features face projection technology; heads with projected faces are also available as an option for RoboThespian.
The face image is not a pre-recorded video: it is generated in real time by blending together 3D models of faces, using a graphical application called InYaFace developed by EA.
The pico LED projection system is not very bright, at around 100 lumens. It looks fine in subdued indoor lighting and looks great in the dark.
In strong light the face image is barely visible, and in direct sunlight you will not see it at all.
EA have brighter projection systems, up to 500 lumens, under development; these perform much better in strong light but are still not suitable for daylight use.
The LCD screen eyes used on the standard RoboThespian perform much better in strong light.
InYaFace performs the following functions:
- Animates facial expressions in real time.
- Applies textures (bitmap images of a face) to a generic gender neutral face model.
- Adds expression modifiers which change the base model mesh shape.
- Adds mesh modifiers for mouth shape to create mouth movement to match speech.
- Adds eyeball texture to eyeball meshes.
- Controls gaze direction and pupil dilation.
- Maps the output of the animated face model to a fixed shape mesh that matches robot hardware.
- Uses a geometry mesh of the actual robot face hardware to correct distortion.
- Provides controls for projector field of view and position to correct distortion.
- Accepts JSON encoded commands to control animation and projector corrections on the fly.
- Corrects for projector lens colour aberration.
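The JSON command interface mentioned above can be sketched as follows. This is a minimal illustration only: the field names (`expression`, `gaze`, `pupil_dilation`, `projector`) are assumptions for the sake of example, not the documented InYaFace schema.

```python
import json

# Hypothetical on-the-fly command for the face animation engine.
# All key names below are illustrative assumptions.
command = {
    "expression": {"smile": 0.8, "brow_raise": 0.2},  # blend weights, 0..1
    "gaze": {"yaw": -10.0, "pitch": 5.0},             # gaze direction, degrees
    "pupil_dilation": 0.6,                            # 0 = constricted, 1 = dilated
    "projector": {"fov": 42.0, "offset": [0.0, -0.02]},  # distortion correction
}

# Commands are sent as JSON-encoded strings.
message = json.dumps(command)
print(message)
```

In practice such a string would be sent to the robot's control interface; the transport (socket, HTTP, etc.) is not specified here.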
Controlling Face Expressions
Creating Sequences with Facial Animation
A 'Guise' is a bitmap texture which can be applied to the robot face. This is usually a human character, but does not have to be! A selection of pre-constructed guises are included by default on SociBot and SociBot-Mini. These include a range of ages, races and genders (including a 'gender-neutral' face), as well as several more fanciful characters.
A guide to creating guises can be found here >
Eye colour or symbol can be set separately from the overall guise. See 'Understanding Sequences' for instructions on how to set both guise and eye colour using JSON strings.
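As a rough sketch, setting a guise and an eye colour via JSON strings might look like the following. The key names and values here are placeholders for illustration; consult 'Understanding Sequences' for the actual schema used by the robot.

```python
import json

# Hypothetical settings messages: 'guise' and 'eye_colour' are assumed
# key names, and the values are made-up examples.
set_guise = json.dumps({"guise": "adult_female_01"})
set_eyes = json.dumps({"eye_colour": "green"})

print(set_guise)
print(set_eyes)
```

Because the two settings are independent, they can be sent as separate commands, letting a sequence change eye colour without reloading the whole guise texture.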
Mouth Shapes and Realistic Lip-syncing to Text-to-Speech
Your projected-face robot will lip-sync, with realistic mouth shapes and movements, to any Text-to-Speech created with Acapela in Virtual Robot, so you do not have to create the mouth movements yourself. You can modify these mouth shapes, or replace them with your own, in Virtual Robot if desired.
This functionality is currently being developed to work with any Acapela voices installed locally on your robot as well.
Technical details here - Text-to-speech Auto Generated Mouth Positions
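The idea behind auto-generated mouth positions can be sketched as a mapping from phonemes to mouth shapes (visemes). The phoneme labels and shape names below are assumptions for illustration, not the actual Acapela or Virtual Robot data.

```python
# Illustrative phoneme-to-viseme table; labels and shape names are
# made up for this sketch, not taken from the real TTS pipeline.
PHONEME_TO_VISEME = {
    "AA": "open", "IY": "wide", "UW": "round",
    "M": "closed", "B": "closed", "P": "closed",
    "F": "teeth_on_lip", "V": "teeth_on_lip",
}

def visemes_for(phonemes):
    """Map a phoneme sequence to mouth shapes, defaulting to 'neutral'."""
    return [PHONEME_TO_VISEME.get(p, "neutral") for p in phonemes]

print(visemes_for(["HH", "AA", "M"]))  # ['neutral', 'open', 'closed']
```

In the real system the TTS engine supplies phoneme timings along with the audio, so each viseme can be scheduled to coincide with its phoneme, and the mouth mesh modifiers described earlier blend between the shapes.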