Projects

Rapid Avatar

We generate a 3D character from a human subject using commodity sensors and near-automatic processes. While a digital double of a person can be built using a standard 3D production pipeline, we are able to generate a photorealistic avatar using commodity hardware, with no artistic or technical intervention, in approximately 25 minutes, and for essentially no cost.


YouTube video showing Rapid Avatar process: https://youtu.be/bVW9pEIv3is

Kinect capture and generation of 3D models

Using a single Microsoft Kinect v1 and no outside assistance, we are able to generate a 3D model by scanning the subject from 4 angles, with a total processing time of less than 3 minutes.

Software: http://smartbody.ict.usc.edu/fast-avatar-capture-software-download

Paper: http://onlinelibrary.wiley.com/doi/10.1002/cav.1579/pdf

YouTube video showing process: https://youtu.be/wzmI6v2LkJA?list=PL38CB86DA1BC151F7
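
As a rough illustration of the merging step, the sketch below fuses four partial scans with pairwise ICP in Open3D, using a 90-degree rotation about the vertical axis as the initial guess for each view. The file names, voxel sizes, and distance thresholds are assumptions for illustration only; the actual capture and reconstruction method is described in the paper above.

    # Illustrative only: fuse four partial scans (front/right/back/left) into a
    # single point cloud with pairwise ICP. Not the published Rapid Avatar method.
    import numpy as np
    import open3d as o3d

    def load_scan(path, voxel=0.005):
        pcd = o3d.io.read_point_cloud(path)          # hypothetical file saved per view
        pcd = pcd.voxel_down_sample(voxel)
        pcd.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=0.02, max_nn=30))
        return pcd

    scans = [load_scan("scan_%d.ply" % i) for i in range(4)]
    merged = scans[0]
    for i, nxt in enumerate(scans[1:], start=1):
        # Each view is roughly i * 90 degrees around the subject's vertical axis.
        init = np.eye(4)
        init[:3, :3] = o3d.geometry.get_rotation_matrix_from_axis_angle(
            np.array([0.0, i * np.pi / 2.0, 0.0]))
        reg = o3d.pipelines.registration.registration_icp(
            nxt, merged, 0.03, init,
            o3d.pipelines.registration.TransformationEstimationPointToPlane())
        nxt.transform(reg.transformation)
        merged += nxt

    o3d.io.write_point_cloud("merged_scan.ply", merged.voxel_down_sample(0.003))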


Avatar Gestures

It is important to incorporate more than just appearance into these avatars. The avatars must not only look like, but also behave like, their real human counterparts. We perform a study in which we ‘avatarize’ two people, apply their own and each other’s gestural styles to the avatars, and ask viewers which avatar better represents the original capture subject.

Paper: http://www.arishapiro.com/MIG2014_gestures.pdf
YouTube video: https://youtu.be/4pZbTdrVYpc


Raspberry Pi-based photogrammetry cage

We build a low-cost photogrammetry cage using 100+ Raspberry Pi units. The system is able to generate a photorealistic avatar in about 5 minutes; the video below shows the quality of the results.

Original design: http://www.pi3dscan.com

YouTube video showing our results: https://youtu.be/v5Z6SRIb4U0?list=PL38CB86DA1BC151F7
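
As a rough sketch of the synchronization problem, the snippet below broadcasts a single UDP trigger that every Pi listens for, so that all cameras fire at approximately the same instant. The port, message format, and file paths are assumptions for illustration; they are not the pi3dscan design or our exact setup.

    # Illustrative sketch of simultaneous capture across many Raspberry Pi cameras:
    # a controller broadcasts a UDP trigger, and every Pi snaps a still on receipt.
    import socket, sys, time

    TRIGGER_PORT = 9000  # hypothetical port

    def controller():
        # Send one broadcast packet that every Pi on the subnet receives.
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(b"CAPTURE", ("255.255.255.255", TRIGGER_PORT))

    def pi_node():
        from picamera import PiCamera  # available on the Pi
        camera = PiCamera(resolution=(2592, 1944))
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("", TRIGGER_PORT))
        while True:
            msg, _ = sock.recvfrom(32)
            if msg == b"CAPTURE":
                # Time-stamped file name so repeated captures do not overwrite each other.
                camera.capture("/home/pi/scan_%d.jpg" % int(time.time()))

    if __name__ == "__main__":
        controller() if sys.argv[1:] == ["trigger"] else pi_node()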

Automatic rigging and reshaping of a 3D scan-generated model

We automatically rig a 3D model generated from a photogrammetry- or RGB-D-based scan and allow the reshaping of that model based on statistical information of the human form.

Paper: http://www.arishapiro.com/AvatarRiggingAndReshaping_FengCasasShapiro.pdf

Software: http://smartbody.ict.usc.edu/autoriggerandreshaper

YouTube video: https://www.youtube.com/watch?v=e1kCK3DKrNA&feature=player_embedded
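
The reshaping relies on a statistical (PCA-style) body shape space: a body is the mean shape plus a weighted sum of principal components, and changing the weights changes attributes such as height or girth. The sketch below shows only that idea with stand-in arrays; the actual method, which transfers the deformation onto the scanned, rigged mesh, is described in the paper above.

    import numpy as np

    # Hypothetical shape space: mean body vertices plus K principal components.
    N, K = 5000, 10
    mean = np.zeros((N, 3))
    components = np.random.randn(K, N, 3) * 0.01   # stand-in for a learned basis

    def reshape_body(betas):
        """Return reshaped vertices for shape coefficients betas (length <= K)."""
        betas = np.asarray(betas, dtype=float).reshape(-1, 1, 1)
        return mean + (betas * components[: len(betas)]).sum(axis=0)

    # e.g. push the first component (often correlated with overall size) by +2 std
    # and pull the third back slightly.
    reshaped = reshape_body([2.0, 0.0, -0.5])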


Automated capture and generation of photorealistic blendshapes

An automated process that allows an unaided user to capture a set of static facial poses from a single RGB-D sensor and then automatically reconstructs a set of blendshapes from those scans. We automatically integrate the separate body and face models into a single controllable 3D character.

YouTube video results 1: https://youtu.be/cQ8QjEZ6gwE?list=PL38CB86DA1BC151F7

YouTube video results 2: https://youtu.be/bif1BuxlU5w?list=PL38CB86DA1BC151F7

Paper: http://www.arishapiro.com/RapidBlendshapeModeling_CASA2016.pdf
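
Once the blendshapes are reconstructed, the face can be driven with the standard delta formulation: the neutral mesh plus weighted offsets toward each captured expression. A minimal sketch, with stand-in arrays rather than real scan data:

    import numpy as np

    def blend(neutral, targets, weights):
        """neutral: (N, 3); targets: (K, N, 3) reconstructed expressions; weights: (K,)."""
        deltas = targets - neutral                      # per-expression offsets
        return neutral + np.tensordot(weights, deltas, axes=1)

    # Stand-in data: two expressions over a four-vertex mesh, mixed 30% / 70%.
    neutral = np.zeros((4, 3))
    targets = np.random.rand(2, 4, 3)
    face = blend(neutral, targets, np.array([0.3, 0.7]))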


Hand synthesis on a photorealistic scan

Many photorealistic scans do not capture enough detail to automatically construct a detailed hand and finger model. We generate an articulated hand model from the photorealistic scan using a morphable model as part of the automatic rigging step.
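
A hedged sketch of the morphable-model idea: given point correspondences between the hand region of the scan and the template, the shape coefficients fall out of a linear least-squares solve. The arrays and correspondences below are stand-ins; the actual fitting and articulation are part of the automatic rigging step described above.

    import numpy as np

    def fit_hand_coefficients(scan_pts, mean, components):
        """scan_pts, mean: (N, 3); components: (K, N, 3). Returns (K,) coefficients."""
        A = components.reshape(len(components), -1).T   # (3N, K) basis matrix
        b = (scan_pts - mean).reshape(-1)               # (3N,) offsets from the mean hand
        betas, *_ = np.linalg.lstsq(A, b, rcond=None)
        return betas

    # Stand-in data: recover known coefficients from a synthetic "scan".
    mean = np.zeros((200, 3))
    components = np.random.randn(5, 200, 3)
    true_betas = np.array([0.5, -1.0, 0.2, 0.0, 0.8])
    scan_pts = mean + np.tensordot(true_betas, components, axes=1)
    print(fit_hand_coefficients(scan_pts, mean, components))   # ~ true_betas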

Applications – Digital Doctors

Photorealism might be important for authenticity or to establish trust. One such opportunity is the ability to interact with a specific doctor, perhaps a world expert in his or her field.

YouTube video showing app and sample interaction: https://www.youtube.com/watch?v=NMEMZSCGI1Q

Applications – Photorealistic Avatars for Communication

The fast, automated generation of photorealistic avatars opens up the possibility of using digital versions of ourselves for various purposes, such as communication. We might choose to communicate through our avatars, perhaps creating ‘idealized’ versions of ourselves.


Applications – Use of photorealistic avatars?

Other than narcissism and ego, what is the use or value of creating 3D avatars that look like ourselves?
Does playing yourself in a simulation change your behavior? We scanned 100+ people and had each person control their own avatar in a maze game. The result: no change in behavior, but players were more interested in the task because they were playing ‘themselves’. (Best Presentation award at the ACM SIGGRAPH Motion in Games 2016 conference.)

Paper: http://www.arishapiro.com/Effectdoppleganger_MIG16.pdf



Mobile Virtual Humans

Virtual humans have been shown to be useful for training and interaction in simulations, but a virtual human on a mobile device changes several fundamentals of that interaction. For example, a mobile virtual human is freed from the constraints of time and place that typically restrict interactions with non-mobile virtual humans (such as interactions with a simulation system in a simulation room). In addition, a mobile virtual human is capable of long-term interaction with a user.

We create a platform for quickly building virtual characters on mobile devices that makes it easy to generate talking, expressive, and responsive characters:

Paper: http://www.arishapiro.com/APlatformForBuildingMobileVirtualHumans.pdf

Software: http://smartbody.ict.usc.edu/mobilevirtualhumans
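
As a flavor of how a character on the platform can be made responsive, the sketch below matches the user's text against canned answers and speaks the result through a BML speech request. The keyword matching is purely illustrative, and the bml.execBML call follows SmartBody's desktop Python scripting examples; the mobile build's API may differ.

    # Runs inside SmartBody's Python interpreter, where the `bml` object is provided.
    ANSWERS = {
        "name":    "I'm a virtual human running on your phone.",
        "weather": "I can't see outside, but I hope it's sunny.",
    }
    DEFAULT = "Could you rephrase that?"

    def respond(character, user_text):
        # Pick the first canned answer whose keyword appears in the user's text.
        text = next((a for k, a in ANSWERS.items() if k in user_text.lower()), DEFAULT)
        # SmartBody generates the lip sync and timing for the speech behavior.
        bml.execBML(character, '<speech type="text/plain">%s</speech>' % text)

    # respond('ChrBrad', 'What is your name?')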

 


We compare how people interact with mobile characters presented through animated, static, or audio-only interfaces:



YouTube video showing interaction samples: https://youtu.be/ukECRKLBSJ8

In addition, we compare photorealistic characters with 3D characters and with audio-only interactions:


Paper: http://www.arishapiro.com/Study_comparing_video_and_mobile_MIG2016.pdf

 


Character Animation System

While the replication of motion is well understood, the generation of synthetic motion through a set of controls is an open problem in animation research. We build and distribute a character animation system that allows you to construct simulations with characters that can walk, talk, gesture, and manipulate objects in their environments. Animation features include lip sync to speech, online retargeting, automatic rigging, reaching/grasping/touching, gazing, saccadic eye movements, coordinated facial control, example-based locomotion, steering around static and dynamic obstacles, dynamic simulation, and more.

Papers (lots of them): http://smartbody.ict.usc.edu/publications

Software: http://smartbody.ict.usc.edu

YouTube videos: http://smartbody.ict.usc.edu/video
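
Most of these features are driven from SmartBody's Python scripting layer with BML requests, as in the distributed examples. A minimal sketch, assuming a character named ChrBrad and a pawn named sphere1 already exist in the scene; consult the SmartBody manual for the full set of behaviors and attributes.

    # Runs inside SmartBody's Python interpreter, where `bml` and `scene` are provided.
    # Gaze at an object, walk toward it, then speak with automatically generated lip sync.
    bml.execBML('ChrBrad', '<gaze target="sphere1" sbm:joint-range="EYES NECK"/>')
    bml.execBML('ChrBrad', '<locomotion target="sphere1"/>')
    bml.execBML('ChrBrad', '<speech type="text/plain">I am walking to the sphere.</speech>')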