Rapid Avatar

We generate a 3D character from a human subject using commodity sensors and near-automatic processes. While a digital double of a person can be built using a standard 3D pipeline, we are able to generate a photorealistic avatar using commodity hardware, with no artistic or technical intervention, in approximately 25 minutes, at essentially no cost.

YouTube video showing Rapid Avatar process:

Kinect capture and generation of 3D models

Using a single Microsoft Kinect v1 and no outside assistance, we generate a 3D model by scanning the subject from four angles, with a total processing time of under 3 minutes.
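As a rough sketch of the capture geometry (not the actual pipeline), the four partial point clouds taken at roughly 90-degree intervals can be rotated back into a common frame and concatenated before surface reconstruction; the function names and the fixed-angle assumption are illustrative only:

```python
import numpy as np

def rotation_y(angle_deg):
    """Rotation matrix about the vertical (y) axis."""
    a = np.radians(angle_deg)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def merge_scans(scans, angles_deg=(0, 90, 180, 270)):
    """Rotate each partial point cloud back into a common frame and
    concatenate. In practice an ICP refinement step would follow this
    coarse alignment."""
    aligned = [pts @ rotation_y(-a).T for pts, a in zip(scans, angles_deg)]
    return np.vstack(aligned)
```

Real captures would not sit at exactly 90-degree intervals, which is why a registration step such as ICP normally refines the coarse alignment.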



YouTube video showing process:



Avatar Gestures

It is important to incorporate more than just appearance into these avatars: the avatars must not only look like, but also behave like, their real human counterparts. We perform a study in which we ‘avatarize’ two people, apply their own and each other’s gestural styles to the avatars, and ask viewers which avatar better represents the original capture subject.

YouTube video:


Raspberry Pi-based photogrammetry cage

We build a low-cost photogrammetry cage using more than 100 Raspberry Pi units. The system is able to generate a photorealistic avatar of the following quality in about 5 minutes:
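The exact camera arrangement is not specified above; as an illustrative sketch only, a cylindrical cage of 7 rings of 15 cameras (105 total, consistent with the "100+" count) could be laid out and aimed at the subject like this — every count and dimension here is hypothetical:

```python
import numpy as np

def cage_camera_poses(n_rings=7, cams_per_ring=15, radius=1.5,
                      height=2.1, target=(0.0, 1.0, 0.0)):
    """Place cameras on a cylinder around the subject, each aimed at a
    common target point. Returns (position, unit_view_direction) pairs."""
    poses = []
    target = np.asarray(target, dtype=float)
    for ring in range(n_rings):
        y = height * (ring + 0.5) / n_rings       # ring height, centered in its band
        for i in range(cams_per_ring):
            theta = 2.0 * np.pi * i / cams_per_ring
            pos = np.array([radius * np.cos(theta), y, radius * np.sin(theta)])
            view = target - pos
            poses.append((pos, view / np.linalg.norm(view)))
    return poses
```

For photogrammetry, all cameras must fire near-simultaneously so the subject's pose is identical in every image; a broadcast trigger over the cage's network is one common way to achieve that.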

Original design:

YouTube video showing our results:



Automatic rigging and reshaping of a 3D scan-generated model

We automatically rig a 3D model generated from a photogrammetry- or RGB-D-based scan and allow the reshaping of that model based on statistical information about the human form.
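A minimal sketch of the reshaping idea, assuming a PCA-style statistical body model (a mean shape plus a linear basis over flattened vertex coordinates); the model and function names are illustrative, not the actual implementation:

```python
import numpy as np

def reshape_body(vertices, mean_shape, basis, coeffs):
    """Deform a registered scan using a statistical (PCA) body model:
    the scan's deviation from the mean shape (its personal detail) is
    preserved, while low-dimensional shape coefficients (e.g. taller,
    heavier) are changed to reshape the body."""
    personal_detail = vertices - mean_shape     # scan-specific geometry
    new_shape = mean_shape + basis @ coeffs     # statistically reshaped body
    return new_shape + personal_detail
```

This only works once the scan is registered to the model's vertex layout, which is part of what the automatic rigging step provides.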



YouTube video:


Automated capture and generation of photorealistic blendshapes

We develop an automated process that allows an unaided user to capture a set of static facial poses from a single RGB-D sensor, then automatically reconstructs a set of blendshapes from those scans. We automatically integrate the separate body and face models into a single controllable 3D character.
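Once the static pose scans are registered against the neutral scan, they can drive facial animation as a standard linear blendshape model; this sketch assumes per-vertex correspondence between the scans is already established:

```python
import numpy as np

def blend(neutral, pose_scans, weights):
    """Linear blendshape model: each captured static pose contributes its
    per-vertex offset from the neutral scan, scaled by an animation
    weight (typically in [0, 1])."""
    result = neutral.copy()
    for scan, w in zip(pose_scans, weights):
        result += w * (scan - neutral)
    return result
```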

YouTube video results 1:

YouTube video results 2:



Hand synthesis on a photorealistic scan

Many photorealistic scans do not capture enough detail to automatically construct a detailed hand and finger model. We generate an articulated hand model from a photorealistic scan by fitting a morphable model as part of the automatic rigging step.
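A hedged sketch of the morphable-model idea: where the scan lacks hand detail, a statistical hand model (a mean shape plus linear basis, both hypothetical here) can be fit to whatever hand geometry the scan does provide via least squares:

```python
import numpy as np

def fit_morphable_hand(scan_points, mean_hand, basis):
    """Fit a morphable hand model to (sparse, noisy) scan geometry by
    solving for shape coefficients in the least-squares sense:
        scan_points ~ mean_hand + basis @ coeffs
    Returns the fitted hand and the coefficients."""
    coeffs, *_ = np.linalg.lstsq(basis, scan_points - mean_hand, rcond=None)
    return mean_hand + basis @ coeffs, coeffs
```

The fitted model, being complete and articulated, then replaces the scan's low-detail hand region during rigging.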

Applications – Digital Doctors

Photorealism might be important for authenticity or to establish trust. One such opportunity is the chance to interact with a specific doctor, perhaps a world expert in his or her field.

YouTube video showing app and sample interaction:

Applications – Photorealistic Avatars for Communication

The fast, automated generation of photorealistic avatars opens the possibility of using digital versions of ourselves for various purposes, such as communication. We might choose to communicate through our avatars, perhaps creating ‘idealized’ versions of ourselves.


Applications – Use of photorealistic avatars?

Other than narcissism and ego, what is the use or value of creating these 3D avatars that look like ourselves?
Does playing yourself change your behavior in a simulation? We scanned more than 100 people, who then controlled avatars of themselves in a maze game. The result: no change in behavior, but players were more interested in the task because they were playing ‘themselves’. (Best Presentation Award at the ACM SIGGRAPH Motion in Games 2016 conference.)



Mobile Virtual Humans

Virtual humans have been shown to be useful for training and interaction in simulations, but a virtual human on a mobile device changes several fundamentals of that interaction. For example, a mobile virtual human is free of the constraints of time and place that typically restrict interactions with non-mobile virtual humans (such as a simulation system in a simulation room). In addition, a mobile virtual human is capable of long-term interaction with a user.

We create a platform for quickly building virtual characters on mobile devices, allowing you to easily generate talking, expressive, and responsive characters:





We compare how people interact with mobile characters through animated, static, or audio-only interfaces:



YouTube video showing interaction samples:

In addition, we compare photorealistic characters, 3D characters, and audio-only interactions:




Character Animation System

While the replication of motion is well understood, the generation of synthetic motion through a set of controls is an open problem in animation research. We build and distribute a character animation system that allows you to construct simulations with characters that can walk, talk, gesture, and manipulate objects in their environments. Animation features include lip sync to speech, online retargeting, automatic rigging, reaching/grasping/touching, gazing, saccadic eye movements, coordinated facial control, example-based locomotion, steering around static and dynamic obstacles, dynamic simulation, and others.
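As one concrete example of the kind of procedural control involved, coordinated eye-head gazing can be sketched as aiming at a world-space target and letting the eyes rotate up to a limit before the head contributes; the conventions and limits here are illustrative, not the system's actual parameters:

```python
import numpy as np

def gaze_angles(eye_pos, target_pos):
    """Yaw and pitch (degrees) that aim the eyes at a world-space target,
    assuming the character faces +z with y up."""
    d = np.asarray(target_pos, float) - np.asarray(eye_pos, float)
    yaw = np.degrees(np.arctan2(d[0], d[2]))
    pitch = np.degrees(np.arctan2(d[1], np.hypot(d[0], d[2])))
    return yaw, pitch

def split_gaze(yaw, eye_limit_deg=35.0):
    """Eyes rotate first, up to their limit; the head absorbs the rest,
    mimicking coordinated eye-head gaze."""
    eye = max(-eye_limit_deg, min(eye_limit_deg, yaw))
    return eye, yaw - eye
```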

Papers (lots of them):


YouTube videos: