Virtual Robot

From Engineered Arts Wiki
Revision as of 11:50, 19 March 2016


Introduction

Virtual Robot provides a simple-to-use, web-based tool for content creation on EA robots such as RoboThespian and Socibot.
Choose from preinstalled poses and routines, or build your own using our poseable and dynamically integrated robot model, sophisticated text-to-speech software, and intuitive timeline editing.

Log in to Virtual Robot with your username and password at http://virtual.robothespian.co.uk. If you do not have a username and password and would like access to Virtual Robot, please contact Engineered Arts.

VRScreen layout.jpg


  1. Menu Bar - file controls and login details.
  2. Staging area - control and pose robots using your mouse.
  3. Library - comes packed with pre-installed content for you to drag and drop into the timeline.
  4. Timeline - arrange poses, audio components and speech to create your own robot performance.
  5. Inspector - real-time readouts of actuator data and view key frame data.

Technical requirements

Virtual RoboThespian is designed to be used on any platform. That said, some hardware configurations can affect performance; if you are having problems, please check this list and try to ensure, whenever possible, that your system meets the specifications.

Devices: Virtual RoboThespian runs best on a personal PC or laptop. Many mobile devices do not support HTML5, and therefore will not correctly render the virtual environment; nor will all the pose-editing tools be available on a phone or tablet. If using a laptop, we recommend connecting a mouse, as it is difficult to position the robot precisely with a touchpad, and some touchpads lack scrolling functionality (which is required to zoom the display).

Browser: Virtual RoboThespian will only work in HTML5-enabled browsers. For best results, we recommend using Chrome, with Firefox as a second choice. Internet Explorer does not fully support WebGL and requires third-party plugins; we do not endorse the use of Virtual RoboThespian with IE at this time.

Hardware/Drivers: Any 3D graphics card should be sufficient to run Virtual RoboThespian smoothly on your computer; however, WebGL performance will depend on how well your browser integrates with your chosen OS. If you find you are having problems with the display (a list of known issues is included in the Troubleshooting section), we recommend checking the following link to see if your combination of OS, browser and device driver fully supports WebGL: https://www.khronos.org/webgl/wiki/BlacklistsAndWhitelists


Interface and Environment

The Virtual Robot interface is divided into several sub-areas, as shown in the screenshot in the introduction. Here we provide a detailed guide to how these various elements combine to create an intuitive interface for sequencing a robot.


Menu Bar

File and login options are located in a menu bar at the top of the browser window.

VRScreen fileOptions.jpg


  1. File buttons - New blank timeline, Save the current performance, Copy the current performance, Delete the current performance.
  2. File List drop down menu - Type in here to give a name to new performances, or use your mouse to select an existing performance from the drop-down menu.
  3. File History drop down menu - This will list all previous saves to the current file selection.
  4. QR code button - Click to generate a QR code for your specific performance. If you store the image, you can use it to trigger your performance by holding it in the visual field of a real robot.


Staging area

This is the main arena where you can view the robot, watch a sequence play out, or create new poses. Click and drag anywhere in the staging area except on the robot to change the camera angle. Use the mouse scroll wheel (or equivalent laptop function) to zoom in and out. You can create poses by moving the robot parts (limbs, fingers, head, etc.) to the desired position and pressing 'add pose'. The top menu contains some useful options for changing the eye animations and body LED colours. For a detailed guide to pose editing, see posing the robot, below. Double clicking on the robot torso or legs will open the Routines tab in the library. Double clicking on the head will bring up the Head sequences tab, on an arm will bring up the Pose tab, and on the jaw will open the Speech tab.

VRScreen staging.jpg


  1. Mirror Eyes Button - Toggle this on/off to control eyes individually or with mirroring function.
  2. Eye Graphics Drop Down Menu - Select animatable or static eye graphics from the list.
  3. Light Colour Selector - Click on the robot's head, torso, arms or legs (they will highlight blue), then click the drop down button.
  4. Colour Cursor - Move this point around to select your desired colour for the selected body part.
  5. Slider - Adjusts the light brightness for the selected body part.
  6. Robot - Click on him to move and to select body parts for light colours.


Library

VRScreen library header.jpg


The library window allows you to access stored data to use in timeline sequences. When dropped into the timeline, the length of an audio or motion clip will be reflected in the component length as displayed in the timeline. The library is currently grouped into the following sublibraries:

Pose

Useful static poses and short gestures (e.g. beckon).

Head

Head poses and movements.

Eyes

Eye positions and expressions, and a blink function.

Colour

As with the cheeks, some useful colours to be used in the body/arms/pelvis LEDs.

Cheeks

Some useful precomposed cheek colours for the LEDs in the face.

Routines

Longer routines created for real RoboThespian installations, which are entertaining or useful in a general setting.

Audio

Stored audio segments. These can be sound snippets, music, a recorded speech segment - anything you like.

To upload a new audio file, open the folder containing the sound file, then drag and drop it into the timeline.

Speech

Text to speech input. Key below:

VRScreen speech tab.jpg


  1. Language
  2. Voice selection (number of voices available depends upon language).
  3. Voice style - some voices (especially US English) have optional speech modulations, eg mood, age, character.
  4. Text input - enter your desired speech here, and press 'Add Speech' at the bottom right to insert into the timeline.
  5. Volume - adjust the volume of the tts component.
  6. Speed - adjust the speed.
  7. Shaping - how much inflection is given to the tts component.
  8. Lipsync Gain - Controls the range of motion for the jaw.
  9. Append & Add at Marker - Add tts component to the speech track.
  10. Expand & Contract the Library Area.


Like the pre-composed elements, TTS segments will reflect their true length when dropped into the timeline; however, it may take a few moments for the engine to calculate the length.

Timeline

The timeline lets you view, play and edit complete routines for the Virtual (and real!) RoboThespian.

VRScreen timeline menu.jpg


VRScreen timeline tracks.jpg


VRScreen animation toolbox.jpg


  1. Animation Toolbox - Open/Close the floating animation toolbox
  2. Copy and Paste - Select an item to copy, and paste at current marker position
  3. Green Timeline Marker & Marker Time display
  4. Undo / Redo (Unlimited - currently only works for toolbox actions)
  5. Item Mode / Frames Mode (Current mode highlighted in orange. Each mode offers different ways to edit)
  6. Play/Pause & Restart your performance. Restart plays from the beginning
  7. Reset - Resets the virtual robot to a default start pose
  8. Minimise Timeline area
  9. Scrollbar - Drag to navigate the timeline (Dragging the ends will compress or extend the timeline)
  10. Current time in seconds of green marker position
  11. Track Type Names
  12. Individual Track Show/Hide Toggles
  13. Motion, Audio, Speech, and Lipsync Tracks (Item edit mode with keys toggled to SHOW, audio and TTS showing file names)
  14. Motion, Audio, Speech, and Lipsync Tracks (Item edit mode with keys toggled to HIDE, audio and TTS showing wave forms and linear interpolation between keys)
  15. Motion Track (with Keyframe Edit Mode #5 toggled on)
  16. The Animation Toolbox (individual buttons described in Animation Toolbox )


Drag Library elements into the timeline area where you wish to insert them. You can drag elements to reorder them (although note that routines containing both sound and movement have these inextricably linked). Audio files such as your own mp3 files can be dragged directly into the timeline area, or added from the audio tab in the library. Text to Speech (TTS) items are created at the marker's current position, or appended to the last one in the timeline if it exists. A lipsync data item is automatically created and linked to the TTS item. If a TTS item is deleted, the linked lipsync data item will remain and will also need to be deleted if not required.

The VR interface should not allow any element (b) to be placed before zero seconds in the timeline (a). If this does happen, the sequence will not play on a real RoboThespian.

Element before zero seconds annotated.png
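As an illustration of the check described above, this Python sketch flags timeline items that start before zero seconds. The item structure (a list of dicts with `name` and `start` fields) is hypothetical, chosen only for the example; the real VR file format is not documented here.

```python
def validate_timeline(items):
    """Return the names of items that start before 0 s.

    items: list of dicts with 'name' and 'start' (seconds).
    Hypothetical structure for illustration only.
    """
    return [it["name"] for it in items if it["start"] < 0]

items = [{"name": "wave", "start": -0.2},
         {"name": "hello.mp3", "start": 0.0}]
print(validate_timeline(items))  # ['wave'] - move these to 0 s or later
```

Any item flagged this way should be dragged to 0 s or later before the sequence is sent to a real robot.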

Animation Toolbox

The animation toolbox provides a selection of tools (upper row) and body part filters (lower row). It is a floating toolbox, so it can be moved around the browser and placed anywhere you like. Tools are ordered from left to right. Keyboard shortcuts for each button are displayed when hovering the mouse over the button.


VRScreen animation toolbox sml.jpg

Upper Row Buttons
  • Add Key Pose - Adds current robot pose at current marker position based on filter selection.
  • Interpolation Type - Choose between Stepped/Linear/Spline interpolation types.
  • Apply Interpolation - Apply the selected interpolation to the currently selected timeline item.
  • Remove Key Pose - Removes the key pose at current marker position based on filter selection.
  • Remove All Key Poses - Removes all key poses in the selected timeline item based on filter selection.
  • Add single Input - Create an input to trigger external devices, such as a sequence on another robot.
  • Slice - Slice selected item at current marker position.
  • Weld - Weld selected item at current marker position.


Lower Row Buttons
  • Select All/None
  • Eyes
  • Head
  • Body
  • Right Arm
  • Left Arm
  • Hands
  • Lights
  • Play Inputs


The body part filters are toggle buttons. If you wish to only add or remove keys for the body, then only toggle the body selection, and so on. The filters are useful for creating animation in passes or for isolating problems with movement.

Item Selection

Items on the timeline (motion data, audio and TTS clips) can be selected by left-clicking (LMB) on them. While an item is selected, holding LMB and dragging left or right moves the item. Multiple items can be selected and manipulated by LMB clicking to select, holding CTRL to add to the selection, and holding CTRL + SHIFT to add a range within a single track.
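The selection rules above can be modelled in a few lines. This is a sketch of the behaviour as described, not the tool's actual implementation; the function name and data shapes are invented for the example.

```python
def update_selection(selected, clicked, track_items, ctrl=False, shift=False):
    """Model the timeline's click-selection behaviour (illustrative only).

    selected: set of currently selected item indices
    clicked: index of the item just clicked
    track_items: ordered item indices of the clicked item's track
    """
    if ctrl and shift and selected:
        # CTRL + SHIFT: add a contiguous range within one track
        anchor = max(i for i in selected if i in track_items)
        lo, hi = sorted((anchor, clicked))
        return selected | {i for i in track_items if lo <= i <= hi}
    if ctrl:
        # CTRL: add the clicked item to the selection
        return selected | {clicked}
    # plain click: select only the clicked item
    return {clicked}

print(update_selection({2}, 5, [2, 3, 4, 5], ctrl=True, shift=True))
```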


Interpolation Types

Individual frames are distinguished by a fine coloured grating within the timeline element. They are not necessarily evenly spaced, as they correspond to event times rather than clock cycles or animation frames. The type of interpolation between them is represented by either a blank space (stepped) or a faded but constant grating (linear).

VRScreen interpolation stepped.jpg

Stepped interpolation (blank space between key poses) means the key pose value remains unchanged until the marker reaches the next value along the timeline. This results in a jumpy playback effect, but serves well when planning out key poses.


VRScreen interpolation linear.jpg

Linear interpolation (faded grated colours) means the values from one key pose to the next are averaged out over the time between them. The result is a smooth movement between key poses, with equal spacing between each frame.

Linear interpolation updates automatically when adding key poses between existing key poses. You can also change the interpolation type by selecting the element you wish to change, choosing the interpolation type you require from the animation toolbox drop-down list, and clicking the interpolate button next to the drop-down list.
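The difference between stepped and linear interpolation can be sketched in Python. This is a minimal illustration of the two modes described above, not the tool's actual code; the function and data layout are assumptions made for the example.

```python
def interpolate(keys, t, mode="linear"):
    """Return the pose value at time t from (time, value) key poses.

    keys: list of (time, value) tuples sorted by time.
    mode: "stepped" holds the previous key's value until the next key;
          "linear" averages the change over the time between keys.
    """
    if t <= keys[0][0]:
        return keys[0][1]
    if t >= keys[-1][0]:
        return keys[-1][1]
    for (t0, v0), (t1, v1) in zip(keys, keys[1:]):
        if t0 <= t <= t1:
            if mode == "stepped":
                return v0  # hold until the marker reaches the next key
            alpha = (t - t0) / (t1 - t0)
            return v0 + alpha * (v1 - v0)

keys = [(0.0, 0.0), (1.0, 90.0)]            # e.g. a joint angle in degrees
print(interpolate(keys, 0.5, "stepped"))    # 0.0 (jumpy playback)
print(interpolate(keys, 0.5, "linear"))     # 45.0 (smooth movement)
```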

Inspector

The slideable window on the right gives a real-time list of all hardware outputs.

Posing the robot

Most robot elements can be controlled with the mouse alone. The following is a list of parts, their controllable aspects or axes, and how to move each one.

Robot Part | Controllable feature | How to edit
Eyes | Iris position | Mouse: left click anywhere in the eyeball and hold the button down to drag the eyes to an appropriate position.
Eyes | Pupil size | Mouse+ctrl: left click on the iris or pupil while holding down the ctrl key on your keyboard. Dragging the mouse will increase or decrease pupil size.
Eyes | Graphics | A drop-down menu on the upper left of the pose screen lists graphical display options for the eyes. Be warned that not all of these will animate or have editable features.
Eyelids | Vertical position | Mouse: left click on the upper or lower eyelid and drag to position.
Eyelids | Tilt | Mouse+ctrl: left click on the upper or lower eyelid while holding down the ctrl key on your keyboard. Dragging the mouse will tilt the robot's eyelids up or down.
Eyes + Eyelids | Simultaneous editing | To the left of the dropdown graphics menu, there is a 'mirror eyes' toggle. This is on by default, and means that edits to one eye will be reflected in the other. To change an individual eye or eyelid, without affecting the other, toggle this option off.
Head | Yaw and pitch position | Mouse: left click and drag with the mouse to control yaw and pitch of the head. Lateral (x) mouse position controls yaw, vertical (y) controls the pitch.
Head | Roll position | Mouse+ctrl: left click on the head while holding down the ctrl key on your keyboard to edit the head rotation about the roll axis only. The other axes will be disabled when ctrl is held.
Head | Jaw | Mouse: Left click and drag the mouse to open or close the robot's jaw. This is a binary mode (open/closed) without interim positioning.
Head | Face LEDs | Drop-down colour menu/brightness slider: Click briefly on the head (it should flash blue to indicate it is selected). Using the drop down colour map on the upper menu bar, and the brightness slider tool, you can control the LED output for the robot cheeks.
Upper arm/shoulder | Yaw and pitch position | Mouse: left click on the upper arm or shoulder joint and drag to rotate the arm yaw or pitch. Lateral (x) mouse position controls the yaw (abduction/adduction of the arm), vertical (y) mouse position controls the pitch (raising or lowering the joint). For best and most precise results, use the shoulder joint for pitch control and the centre of the joint (around the bicep) for yaw control. Note that controlling yaw from the shoulder can cause overlap difficulties with body roll angle.
Upper arm/shoulder | Roll position | Mouse+ctrl: left click on the upper arm while holding down the ctrl key to control arm roll (pronation/supination).
Upper arm | Arm LEDs | Drop-down colour menu/brightness slider: Click briefly on the arm (it should flash blue to indicate it is selected). Using the drop down colour map on the upper menu bar, and the brightness slider tool, you can control the LED output for the arms separately.
Elbow | Pitch position | Mouse: left click on the forearm and drag along the vertical axis to control elbow pitch. Note that the elbow has only one true degree of freedom, however you can control the roll axis of the wrist (see Wrist/Roll) and the yaw axis of the shoulder from the forearm.
Wrist | Roll position | Mouse+ctrl: left click on the forearm or hand to control the roll axis of the wrist/hand.
Hand | Pitch position | Mouse: left click on the hand to control wrist pitch. Note that there is no yaw degree of freedom in the wrist on this RoboThespian model - lateral and vertical mouse movement both control wrist pitch only. Depending on the overall body pose, it may be more advantageous to switch between vertical or lateral motion to achieve the full sensitivity and precision.
Hand | Finger flexion | Mouse: Click on a finger to flex or extend it. Each finger has only one degree of freedom with two possible positions (fully extended or fully curved).
Torso | Roll and pitch position | Mouse: left clicking on the torso and dragging the mouse will change the pitch and roll of the robot torso. Lateral (x) mouse position controls the roll, vertical (y) mouse position controls the pitch. Note that these controls are sensitive to the yaw position of the upper body; for best results, rotate the camera frame so the robot is facing you.
Torso | Yaw position | Mouse+ctrl: left click on the torso while holding down the ctrl key to control yaw rotation of the upper body.
Torso | Body LEDs | Drop-down colour menu/brightness slider: Click briefly on the torso (it should flash blue to indicate it is selected). Using the drop down colour map on the upper menu bar, and the brightness slider tool, you can control the LED output for the torso.
Legs | Pelvis LEDs | Drop-down colour menu/brightness slider: Click briefly on the legs (they should flash blue to indicate they are selected). Using the drop down colour map on the upper menu bar, and the brightness slider tool, you can control the LED output for the pelvis.


Tips and Tricks for Creating Performances

Audio

Recorded speech is the most natural-sounding method of generating a compelling performance, but allows you little editing flexibility. The Acapela TTS engine used by RoboThespian is a strong substitute which, with some care and attention, can be almost as versatile. Using a natural-sounding voice without much speed or shaping distortion will help make your performance believable. Experimenting with different spellings and punctuation can help smooth out intonation and timing. As with most animation methods, time spent on small details will have disproportionately rewarding results.

Here are some useful guides and links to help you get the most out of using TTS voices.

Tips, Hints & Tricks (Acapela Group)

TTS - How does it work (Acapela Group)

Vocal Smileys (Acapela Group)

  • The use of vocal smileys only works with certain voices; for example, entering #cough# or #laugh# will generate the appropriate sound.

Voice Overs and Reference Video

  • When recording VO, make sure you leave enough time between sentences, or write in pauses, if there is an important gesture or action to follow.
  • Recording VO in clips (sentences or small paragraphs) allows for better control of delivery timing. Clips can be spaced in VR.
  • If creating a video for reference with specific gestures/actions make sure you leave enough time for those to play out.
  • Keep in mind that RT cannot move quite as fast as a human actor. A head nod or body lean must be allowed enough time to play out.


Animation

Planning

  • Carefully plan out what you intend to do with the robot. Time spent planning will save hours when it is time to animate.
  • Write a script, or draw a thumbnail sheet or storyboard, detailing how RoboThespian should act and move for each piece of dialogue.
  • Pick out and underline operative words in each piece of dialogue from your script. This will help you make better acting choices.
  • Think about creating strong poses with the robot's body. Consider opposites, silhouettes and the negative space around the body, and how the pose or gesture reads from a distance.
  • Act out the script and become familiar with how it should be delivered. Video yourself so you have reference to review if needed.

AnimatingVR thumbnails.jpg


Animating

There are two common methods for creating 3D animation. The first is called 'Straight Ahead', where the animator does not plan too much and animates the finished details as they go. This method can produce good results, but can be frustrating if changes are required later in production. The other method is often called 'Pose to Pose', and involves building up your animation with careful planning, adding layers or passes of detail as you go.

The method we recommend, and which seems to work best with the Virtual RoboThespian tools, is the Pose to Pose method.

Keep in mind that the frame rate for the Virtual RoboThespian is 10 frames per second; the timeline is divided into 1/10th-second increments.
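One practical consequence of the 10 fps timeline is that key pose times effectively land on a 1/10th-second grid. A small sketch of that snapping, with an invented helper name, for illustration only:

```python
FRAME_RATE = 10  # Virtual RoboThespian timeline resolution, frames per second

def snap_to_frame(seconds):
    """Snap a time in seconds to the nearest 1/10th-second increment."""
    return round(seconds * FRAME_RATE) / FRAME_RATE

print(snap_to_frame(1.234))  # 1.2
print(snap_to_frame(0.26))   # 0.3
```

If you plan timings on paper first, rounding them this way tells you where a key pose will actually sit on the timeline.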

There are no rules for how you might go about animating; everyone likes to animate differently. Below is a list of guidelines to keep in mind when starting a project.


  • Plan to animate by building up your work in passes. A finished animation sequence takes time to create and can become complex if you don't have a logical approach.
  • Start by organizing your audio and sign off on it. Doing this will minimize changes down the road and save time.
  • Become familiar with Key Poses, Breakdowns and Inbetweens. These are animation terms which describe different types of keys.
  • Consider making use of the stepped, linear and spline features. These can make all the difference when working out movements.
  • First pass - Key all the key poses. These are the strongest, most important poses, which define actions and gestures.
  • Move key poses around to get the desired timing. At this stage you should be able to see which poses are working well.
  • Second pass - Add breakdown poses. These are crucial to creating interesting movements and better timing.
  • Third pass - Add inbetween poses. These refine the movements in between the key poses and breakdowns.
  • When animating at 10 frames per second, you may find there is not enough room for breakdowns and inbetweens in the traditional sense.
  • Consider RoboThespian's real-world physical limitations, such as head and torso movement speeds, and keeping the arms in safe positions.
  • The real-life robot can only move at a certain speed, which may be slower than the timing you have specified when placing your key poses.
  • Leave eye darts (rapid eye movements) and blinks to the very last. Doing so will keep your animation easier to manage until you are happy with the body movement.
  • Create nice arcs with the arms and head. Breakdown and inbetween poses help define smoother more natural movement arcs.
  • Try not to twin movements. Twinning is an animation term for body parts that mirror one another and move in synchrony. Your key pose can be a mirror; just try to make the movement into the pose asynchronous. Even the slightest delay will make the movement nicer if everything has some offset. This is hard to achieve with the current VR tools.
  • Use moving holds. When people move they don't gesture and hold still, they keep moving. Subtle movement in and out of gesture poses helps sell the performance.
  • KISS - "Keep it simple, stupid" is a valuable acronym to keep in mind. Clear poses, and simple movements between them, will work best on RoboThespian.
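The speed-limit point in the list above can be checked numerically: given a maximum joint speed, you can estimate the minimum time a movement needs between two key poses. The speed figure below is an assumption made up for the example; check your robot's actual specifications.

```python
def min_pose_time(angle_from, angle_to, max_speed_deg_per_s):
    """Minimum seconds a joint needs to travel between two key poses.

    max_speed_deg_per_s is an assumed limit for illustration only.
    """
    return abs(angle_to - angle_from) / max_speed_deg_per_s

# A 90-degree head turn at an assumed 60 deg/s limit needs at least 1.5 s
print(min_pose_time(0, 90, 60))  # 1.5
```

If the gap between two key poses on the timeline is shorter than this, the real robot will lag behind the virtual one.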


If things are not working and you are spending too much time being frustrated whilst animating, maybe it's time to take a step back. Don't be afraid to go back to the drawing board and start over. It's always better to spend time planning what you intend to do; the end result will always be better, and is often produced faster, with a solid plan in place.


Things to avoid

  • Don't give RoboThespian a case of 'RandomBlinkitus'. Too many blinks, not timed to any particular gesture, movement or piece of dialogue, will make the performance weaker.
  • Don't make RoboThespian move constantly or thrash about. It looks strange, doesn't engage people, and will potentially cause mechanical wear and tear.
  • Don't make RoboThespian gesture too much with each piece of dialogue.
  • Keep in mind that RT cannot touch his own face or body.
  • Try not to position the arms above the head whilst twisted back. This can cause the arm to flip backwards and then suddenly forwards again.


Performance Organisation

When animating a performance from scratch, it's good practice to keep your timeline as clean and simple as possible. This will make future editing and late changes easy to accommodate. Currently the best method for organising your projects in VR is to use both motion tracks, keeping all the body keys on one track and the eye and lighting keys on the other. Separating the eye and lighting keys from the body keys allows you to concentrate on the body first. This method should produce faster results by leaving the small details until last. The eyes are the most time-consuming and important part of animating RT; they are dependent on head and body position, which is why it's important to animate the body first.


Basic Guide to Animation Pipeline

Advanced Guide to Animation Pipeline

Create a Two Robot Performance

Projector Face Animation Process

Guide to biomimetic behaviour

Troubleshooting

403 Forbidden error

If you see the following error, either on the virtual.robothespian.co.uk site or when accessing Virtual RoboThespian via Robot Management on a robot:

Virtual RoboThespian 403 forbidden error.jpg

Please just hit the Log Out button in the top right and log in again.

This error can occur if you have logged in from different machines or browsers, and is a result of using a single sign-in system for Robot Management and Virtual RoboThespian.

Plays in Virtual RoboThespian, but not on real RoboThespian

Please check that there is nothing on the timeline set to start before 0 s. If there is, move it to 0 s or later; otherwise the sequence will not play on the real RoboThespian.

Do not add any elements, e.g. (b), before zero seconds on the timeline (a). This will stop the sequence playing on a real RoboThespian.

Element before zero seconds annotated.png