Page 1
Motion Capture History, Technologies and Applications

Advanced Computing Center for the Arts and Design

Ohio State University

Vita Berezina-Blackburn

©2003-2016, The Ohio State University

Page 2

Motion Capture

• motion capture (mocap) is sampling and recording the motion of humans, animals and inanimate objects as 3D data for analysis, playback and remapping

• performance capture is acting with motion capture in film and games
• motion tracking is real-time processing of motion capture data


Page 3

History of Motion Capture

• Eadweard Muybridge (1830-1904)

• Etienne-Jules Marey (1830-1904)

• Nikolai Bernstein (1896-1966)

• Harold Edgerton (1903-1990)

• Gunnar Johansson (1911-1998)

Presenter Notes:
Marey and Muybridge conducted human and animal motion studies by shooting multiple photographs of moving subjects over a short period of time. Their work had a large impact on many disciplines such as biology, medicine, photography, and animation. Harold Edgerton invented the electronic stroboscope and the modern electronic flash.
Page 4

Eadweard Muybridge

• the flying horse
• 20,000 photos of animal and human locomotion
• UK-USA, 1872

© Kingston Museum

Presenter Notes:
Eadweard Muybridge first became involved in the photography of movement in 1872 when he was asked by Leland Stanford, the former Governor of California, to photograph his horse Occident. At the time, it was not known whether a horse ever had all four feet off the ground while galloping. Muybridge was able to prove this true in his initial studies. Stanford subsequently financed a more elaborate investigation at Palo Alto from 1878-1879, where cameras were placed in a line and fitted with special shutters that could be triggered electro-magnetically by the horse or the wheels of a carriage as it made contact with wires stretched across the track. He was thus the first person to photograph sequences of movement. He later created a moving image from his still sequences with his invention of the zoopraxiscope. Muybridge took more than 20,000 photographs from 1884-85 of men, women, children, animals and birds in almost every conceivable type of movement, resulting in the most comprehensive analysis of movement of its time. His work was published under the title "Animal Locomotion" in 1887 and is still used widely today as a source of illustration and reference.
Page 5

Eadweard Muybridge

• zoopraxiscope

© Kingston Museum

Page 6

Etienne-Jules Marey

• first person to analyze human and animal motion with film

• created chronophotographic gun and fixed plate camera

• France, 1880s

Presenter Notes:
Etienne-Jules Marey was a French physician, inventor, photographer, and Professor of Natural History who specialised in human and animal physiology. He was first introduced to the study of motion when he spoke with Muybridge in 1880 about his pictures of horse locomotion. When Marey discovered that Muybridge had not had any success with photographing birds in flight, he decided to tackle the problem himself. To solve it, Marey invented a photographic gun. It consisted of a rotating wheel with slits; when light passed through a slit, a photographic plate was exposed. The gun had a clock mechanism so that when the shutter was tripped it made twelve exposures of 1/72nd of a second each. Marey's study of the changes in the shape of birds' wings during flight in relation to air resistance was a major contribution to the knowledge of aerodynamics. He also invented the chronophotograph: multiple exposures made on a single glass plate through a camera of his design. He conducted many human studies with his subjects wearing black suits with metallic threads; the subjects walked in front of black panels and their movements were recorded by one camera on a single metal plate. His chronophotographs had an important influence on both science and the arts and helped lay the foundation of motion pictures.
Page 7

Modern Art

• Futurism (Boccioni, Balla and others)

• Marcel Duchamp


Page 8

Rotoscoping

• allowed animators to trace cartoon characters over photographed frames of live performances.

• invented in 1915 by Max Fleischer

• Koko the Clown

• Snow White


© Walt Disney

Presenter Notes:
In 1915, using Muybridge's idea, cartoonist Max Fleischer created the rotoscope, a device that allowed animators to trace cartoon characters over photographed frames of live performers. A time-consuming process used mainly for human motion, it was the first time a real-life performance was used to help create an animated character. The first cartoon character to be rotoscoped was Koko the Clown. The technique was later used by Walt Disney, in 1937, to get realistic human motion for Snow White and the Prince. Rotoscoping is a two-dimensional approach to capturing motion.
Page 9

Nikolai Bernstein


• General Biomechanics – 1924, Central Institute of Labor, Moscow

• physiology of sport and labor activities, foundations of ergonomics
• cyclography
• concepts of degrees of freedom and hierarchical structure of motion control

Presenter Notes:
Nikolai Bernstein was a Soviet scientist who introduced the term biomechanics. He challenged the notion that motor activity is controlled by reflexes and introduced the concepts of "degrees of freedom" and the hierarchical structure of motion control.
Page 10

Harold Edgerton

© Palm Press Inc.

• electronic stroboscope and flash
• exposures of 1/1000th to 1/1,000,000th of a second
• MIT, 1930s-1960

Presenter Notes:
Harold Edgerton, an MIT scientist, is known for developing the electronic stroboscope and the electronic flash for photographic illumination. He developed the stroboscope in 1931 for ultra-high-speed and stop-motion photography. In 1932, he began taking high-speed photographs of familiar activities that move at speeds beyond the ability of the human eye to perceive. His most famous photograph is the coronet formed by a drop of milk. His images were revolutionary because they were taken with exposures between a thousandth and a millionth of a second, and revealed more than the eye could see.
Page 11

Gunnar Johansson


• Visual perception of biological motion, experimental psychology, 1970s, University of Uppsala, Sweden

• Retro-reflective patches on joints
• Video recording instead of film; a searchlight mounted very close to the camera lens so that light reflects from the patches into the lens

• Computer modeling of motion variations

Presenter Notes:
Gunnar Johansson (1911–1998) was a Swedish psychophysicist. He was interested in the Gestalt laws of motion perception in vision. He is best known for his investigations of biological motion. He helped develop the rigidity assumption, which posits that proximal stimuli that can be perceived as rigid objects are generally perceived as such.
Page 12

1980s Computer Graphics

• military and medical research purposes

• first computer graphics use in research labs

• first production use
o Brilliance by Robert Abel, brute force animation technique (1985 Super Bowl ad)
o Waldo C. Graphic (1988), PDI for the Jim Henson Hour
o Mike the Talking Head (Siggraph 88)
o Don't Touch Me (1989)

Presenter Notes:
Motion capture has been in use for decades for military and medical purposes. It was first used in computer graphics research in the late 1970s and early 1980s at schools such as MIT, Simon Fraser University, and the New York Institute of Technology. Actual production use began in the mid-1980s. The use of motion capture for the chrome character known as the sexy robot in Brilliance, the 1985 Super Bowl commercial for the National Canned Food Information Council by Robert Abel, was the first use of motion capture for 3D animation. They used a black magic marker to mark 18 joints on a dancer and then photographed multiple views as she performed the motion. In 1988, PDI created Waldo C. Graphic for the Jim Henson Hour. They used a custom eight-degree-of-freedom input device to control the position and mouth movements of a low-resolution character. They were able to capture the motion in real time in concert with real puppets: the computer image was mixed with a video feed from a camera focused on the puppets so that everyone could perform together. In post-production, Waldo was re-rendered in full resolution, adding dynamic elements. Also in 1988, deGraf/Wahrman developed Mike the Talking Head, an interactive animation shown at Siggraph 88 for Silicon Graphics to show off the real-time capabilities of their new 4D machine. The animated character was controlled in real time by a puppeteer during the conference. In 1989, Kleiser-Walczak used motion capture for the digital actress Dozo in the music video Don't Touch Me.
Page 13

Mocap Technologies

ACTIVE
• electromechanical
• optical fiber
• optical: strobing LEDs
• acoustic
• inertial
• optical markerless based on structured light
• optical markerless based on video

PASSIVE
• optical: retroreflective markers
• acoustic
• optical markerless (video based)

Presenter Notes:
Motion capture is basically passive or active: systems whose markers emit signals and systems whose markers don't. Motion capture systems can also be divided into three areas based on the location of sources and sensors. Inside-in systems have the sensors and sources on the body. An example of this is an electromechanical suit, where the sensors are potentiometers and the sources are the actual joints of the body. The second area, inside-out, has sensors on the body that collect external sources, such as in electromagnetic systems where the sensors move in an externally generated electromagnetic field. The final area, outside-in, has external sensors which collect data from sources on the body, such as in optical systems where the cameras are the sensors and the reflective markers are the sources.
Page 14

Optical motion capture systems
• lightweight, variable-size, retro-reflective markers
• VGA to 16 megapixel resolution cameras with strobing LEDs digitize different views of the performance
• up to 5000 fps
• under 1 mm accuracy
• marker occlusion
• capture volume limits

VICON, NATURAL POINT, MOTION ANALYSIS, QUALISYS

Presenter Notes:
An optical system consists of a computer controlling several light-sensitive cameras placed strategically around the capture space. The cameras capture the light in the field of view and measure the intensity of light for each pixel in the image. The performer wears spherical markers that are covered with a highly reflective tape. The cameras have shutter synchronization and are usually fitted with their own light source that creates a directional reflection from the markers. The views from the different cameras must be calibrated so that the computer knows the location of the different cameras and can determine 3D positions of the markers. You need at least two cameras to determine a 3D point in space from 2D images; three or more give a more robust solution. The advantages of an optical system include large performance areas proportional to the number of cameras, markers that can be moved depending on the object to be captured, and a performer who is not seriously constrained by the markers. The major disadvantages are extensive post-processing, a controlled environment, and occlusion of markers, which to some extent is overcome by having redundant camera coverage from all sides.
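
To make the reconstruction step above concrete, here is a minimal sketch (in Python, not the internals of any commercial system) of triangulating one marker's 3D position from its 2D pixel coordinates in two calibrated views using the standard linear (DLT) method. The function name and the toy projection matrices are illustrative assumptions; the 3x4 matrices would come from the calibration step described above.

```python
import numpy as np

def triangulate_marker(P1, P2, uv1, uv2):
    """Estimate a marker's 3D position from two calibrated camera views.

    P1, P2   : 3x4 camera projection matrices obtained from calibration
    uv1, uv2 : (u, v) pixel coordinates of the same marker in each view
    """
    u1, v1 = uv1
    u2, v2 = uv2
    # Each view contributes two linear equations in the homogeneous point X.
    A = np.array([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    # The least-squares solution is the right singular vector with the
    # smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# Toy usage: two hypothetical cameras half a metre apart observing a marker.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])
marker = np.array([0.2, 0.1, 2.0, 1.0])
uv1 = (P1 @ marker)[:2] / (P1 @ marker)[2]
uv2 = (P2 @ marker)[:2] / (P2 @ marker)[2]
print(triangulate_marker(P1, P2, uv1, uv2))  # approximately [0.2, 0.1, 2.0]
```

Adding more camera views simply appends more rows to the system, which is how redundant coverage also softens the impact of marker occlusion.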
Page 15

Strobing LED marker system
• red or infrared LEDs
• unique strobing frequency for each marker
• no marker swapping
• limited volume
• limited capture time due to battery life for the LEDs
• wires running up and down the capture subject

PHASESPACE

Presenter Notes:
Strobing LEDs are fairly new to the industry. They use a set of LEDs that strobe in a sequence so that only one is visible at a given time and the cameras know exactly which marker it is. Some systems use a row of precalibrated cameras; others use a radio signal in addition to the LED in order to help with occlusion. In systems such as ReActor 2, the capture area is bordered by 12 modular bars. The performer wears an ergonomically designed, lightweight motion suit fitted with robust, slim-line active IR markers. These are connected to a belt pack by thin, flexible cables. The belt pack communicates with the ReActor 2 PC via a wireless radio link. The ReActor 2 PC performs data processing and outputs time-stamped records to a host computer for instant display of animated motions.
Page 16

Electromechanical suits

• linked structures
• potentiometers determine degree of rotation for each link
• no occlusion
• no magnetic or electrical interference
• unlimited capture volume
• low cost
• no global translation
• restricted movement
• fixed configuration of sensors
• low sampling rate
• inaccurate joints

GYPSY MOCAP SYSTEM

Presenter Notes:
Electromechanical suits are based on a set of linked structures which are attached to the performer's body. The structures are linked using angular measurement devices at the joints. Electromechanical suits do not have to deal with the occlusion problem. The suits are portable and are also less expensive than optical or electromagnetic systems. With multiple suits, it is possible to capture multiple performers at once. However, the person's movement is constrained by the armature, and the sensors are fixed and cannot be changed without changing the armature.
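
As a hedged illustration of how a suit's potentiometer readings become a pose (a planar toy example of forward kinematics, not the Gypsy system's actual solver), the relative joint angles are accumulated along a chain of fixed-length segments. Note that, exactly as the slide points out, this yields no global translation.

```python
import numpy as np

def forward_kinematics_2d(segment_lengths, joint_angles_deg):
    """Turn relative joint angles (potentiometer readings) into joint positions.

    segment_lengths  : length of each link, e.g. upper arm, forearm, hand
    joint_angles_deg : relative rotation measured at each joint
    Returns (x, y) positions of each joint, starting at the chain root.
    """
    positions = [np.zeros(2)]
    heading = 0.0
    for length, angle in zip(segment_lengths, joint_angles_deg):
        heading += np.radians(angle)          # accumulate rotations down the chain
        step = length * np.array([np.cos(heading), np.sin(heading)])
        positions.append(positions[-1] + step)
    return positions

# Shoulder-elbow-wrist chain: 30 cm upper arm, 25 cm forearm, 10 cm hand.
for p in forward_kinematics_2d([0.30, 0.25, 0.10], [45.0, -30.0, 10.0]):
    print(np.round(p, 3))
```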
Page 17

Inertial systems
• inertial trackers placed on joints
• measure orientation and position with accelerometers, gyroscopes, magnetometers on each segment
• UWB RF for position tracking
• unlimited capture volume
• no occlusion, multiple subjects
• positional drift
• translational data needs to be collected separately
• battery packs and wires on the performer's body

XSENS MOCAP SYSTEM

Presenter Notes:
Inertial systems attach small inertial trackers (accelerometers, gyroscopes and magnetometers) to each body segment and compute segment orientations by fusing the sensor data. Global position has to be recovered separately, for example with ultra-wideband radio tracking, because integrating inertial data alone drifts over time. Because no cameras are involved, there is no occlusion, the capture volume is effectively unlimited, and multiple subjects can be captured at once, but the performer carries battery packs and wiring on the body.
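
A small numerical sketch of why purely inertial position estimates drift (my own toy example, not how Xsens actually fuses its sensors; the sample rate and bias value are made-up illustration numbers): even a tiny constant accelerometer bias, integrated twice, grows quadratically into spurious translation, which is why global position is usually recovered with additional sources such as UWB radio.

```python
import numpy as np

dt = 1.0 / 120.0                    # 120 Hz sample rate (assumed)
duration = 10.0                     # seconds of capture
bias = 0.02                         # m/s^2 constant accelerometer bias (assumed)

true_accel = np.zeros(int(duration / dt))   # the subject is standing still
measured_accel = true_accel + bias          # sensor reports a small bias

velocity = np.cumsum(measured_accel) * dt   # first integration
position = np.cumsum(velocity) * dt         # second integration

print(f"apparent drift after {duration:.0f} s: {position[-1]:.2f} m")
# Roughly one metre of translation that never happened, growing with time squared.
```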
Page 18

Electromagnetic systems

• electromagnetic sensors placed on joints or other critical points
• measure orientation and position of each sensor relative to the electromagnetic field generated by the transmitter
• no sight-line requirements
• no occlusion, multiple subjects
• electromagnetic interference; small volume if body translation tracking is needed

ASCENSION-TECH, NORTHERN DIGITAL

Presenter Notes:
Electromagnetic systems use a centrally located transmitter which emits an electromagnetic field and a set of receivers which are attached to various parts of the body. The receivers can measure their spatial relationship, both position and orientation, with respect to the transmitter. This system does not have to worry about occlusion problems, and both position and orientation information is collected, but depending on the system the person may be constrained by cables, and the capture volume is usually not as large as in optical systems.
Page 19

Optical fiber system

• fiber-optic sensor

• bend and twist sensors measure transmitted light

• no occlusion

• flexible capture volume

• adjustment to individual proportions is limited

• less accurate data

CYBERGLOVE

Presenter Notes:
Optical fiber systems such as data gloves use a fiber-optic sensor along the fingers. As the fingers bend, they bend the fiber and the transmitted light is attenuated. The finger joint rotation measurements are based on the strength of the attenuated light.
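
A hedged sketch of the mapping step the notes describe: a quick per-user calibration (hand flat, fist closed) and a roughly linear interpolation between the two attenuation readings. Real glove drivers use more elaborate per-joint calibration; the readings and the 90-degree range used here are made-up illustration values.

```python
def calibrate_bend_sensor(reading_flat, reading_bent, bent_angle_deg=90.0):
    """Return a function mapping raw light-attenuation readings to joint angles.

    Assumes, for illustration only, that attenuation varies roughly linearly
    between a 'hand flat' reading and a 'fist closed' reading.
    """
    span = reading_bent - reading_flat

    def to_angle(raw):
        t = (raw - reading_flat) / span
        return max(0.0, min(1.0, t)) * bent_angle_deg   # clamp to the calibrated range

    return to_angle

index_finger = calibrate_bend_sensor(reading_flat=310, reading_bent=742)
print(round(index_finger(525), 1))   # a half-bent finger, roughly 45 degrees
```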
Page 20

Acoustic system

• set of transducers/transceivers generates and evaluates high-frequency sound waves

• other sounds in frequency range can disrupt capture

• accuracy not as high as other systems

INTERSENSE

Presenter Notes:
Acoustic motion capture involves the use of a triad of audio receivers. Audio transmitters are strapped to various parts of the performer's body. The transmitters are sequentially triggered to output a "click", and each receiver measures the time it takes for the sound to travel from each transmitter. The calculated distances to the receivers are triangulated to provide a point in 3D space. The main problem with this approach is the sequential nature of the data positions it creates; ideally, we would like a snapshot of the performer's skeletal position. The position data thus obtained is typically applied to an inverse kinematics system which in turn drives an animated skeleton. One of the big advantages of this method is the lack of occlusion problems normally associated with optical systems. Major disadvantages include the hindrance of the cables, the fact that current systems do not support enough transmitters to accurately capture a detailed performance, and the size of the capture area, which is limited by the speed of sound in air and the number of transmitters. In addition, the accuracy of this approach can sometimes be affected by spurious sound reflections.
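
To show the geometry behind the time-of-flight idea, here is a minimal sketch assuming an idealized room with four fixed receivers and a known speed of sound (not any specific product's algorithm; the receiver layout is a made-up example). Subtracting one range equation from the others linearizes the problem, so the transmitter position falls out of a least-squares solve.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 C

def locate_transmitter(receiver_positions, times_of_flight):
    """Estimate a transmitter's 3D position from time-of-flight to >= 4 receivers."""
    r = np.asarray(receiver_positions, dtype=float)
    d = SPEED_OF_SOUND * np.asarray(times_of_flight, dtype=float)
    # Subtract the first range equation from the rest to obtain linear equations.
    A = 2.0 * (r[1:] - r[0])
    b = (d[0] ** 2 - d[1:] ** 2) + np.sum(r[1:] ** 2, axis=1) - np.sum(r[0] ** 2)
    position, *_ = np.linalg.lstsq(A, b, rcond=None)
    return position

# Hypothetical receivers in a 4 m x 4 m x 3 m capture space.
receivers = [(0, 0, 3), (4, 0, 3), (0, 4, 3), (4, 4, 0)]
true_position = np.array([1.5, 2.0, 1.0])
tof = [np.linalg.norm(true_position - np.array(p)) / SPEED_OF_SOUND for p in receivers]
print(locate_transmitter(receivers, tof))   # approximately [1.5, 2.0, 1.0]
```

The same arithmetic also makes the capture-area limitation visible: larger rooms mean longer flight times per "click", which caps how often each transmitter can be sampled.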
Page 21


o Max Planck Institute research (3D scanner + silhouette analysis from video)

o Captury

Markerless Motion Capture: Full Body

Page 22


o Kinect and other RGB-D sensor development

ORGANIC MOTION
ILM and Manhattan Mocap Group's Multitrack System (markers for computer vision)

Markerless Motion Capture: Full Body

Page 23

Markerless Motion Capture: Face


FACS/Paul Ekman

Video based:

Original R&D: Digital Emily Project
Faceware

Medusa (Disney Zurich)

RGB-d based:

Faceshift

Page 24


Leap Sensor

Markerless Motion Capture: Hands

Presenter Notes:
Two distinctly different approaches to surface deformation tracking: Mova (all done in house) and Image Metrics' Faceware, where video is shot by the user at any resolution with no make-up, uploaded to the company website and processed via cloud computing. Analysis relates unique features to an averaged model. For Mova the emphasis is on the capture process (see video); for Image Metrics, on the analysis.
Page 25

Video-based Motion Analysis


Research Areas
o equipment and subject calibration
o motion tracking
o 3D movement reconstruction (markerless motion capture)
o skeletal solving
o action recognition
o 3D surface reconstruction (surface scanning)

Challenges:
o complex environment variability
o body segmentation
o occlusion
o data volume

Presenter Notes:
Video-based motion analysis has been a research problem in the field of computer vision for over twenty years. Research in the above areas is hampered by several difficult problems, and various assumptions can be made to simplify the task. The first of the problems is a complex, varying environment, which can be reduced by requiring a static and/or uniform background, restricting the movements and number of objects and people, and restricting the complexity of objects in the field of view. The second problem, segmentation, which involves extracting regions of interest from the image, is hampered by image quality, motion blur, low-contrast images, and strong shadows. To make it easier, the subject may be required to wear tight-fitting or colored clothes, or have high-contrast markers attached to his/her joints. The final problem, occlusion, occurs when a part of the body is obstructed by an object or another part of the body in the camera view. Early research avoided this problem by not allowing occlusion in a motion sequence, but since most natural motion involves occlusion, this assumption is not made as frequently in recent research.
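
As a tiny sketch of the segmentation step discussed above, here is thresholded differencing against a static background model, the simplest of the assumptions the notes mention. This is an illustration only, not a production computer-vision method; the threshold and the synthetic images are made-up values.

```python
import numpy as np

def segment_foreground(frame, background, threshold=25):
    """Mark pixels that differ from a static background model as foreground.

    frame, background : H x W uint8 grayscale images
    Returns a boolean mask; real systems follow this with noise cleanup,
    shadow handling and tracking, which is where the hard problems live.
    """
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > threshold

# Synthetic 8x8 example: a bright subject appears in the lower-right corner.
background = np.full((8, 8), 40, dtype=np.uint8)
frame = background.copy()
frame[5:, 5:] = 200
print(segment_foreground(frame, background).astype(int))
```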
Page 26

Typical Marker Based Optical Motion Capture Pipeline

• planning (performers and actions, props, space requirements)

• recording point data (Vicon Blade)

• data processing, real-time or post: standard skeletal solving (Vicon Blade, MotionBuilder, IKinema)

• skeleton creation (3D animation software)

• remapping standard skeletal motion to customized characters (MotionBuilder)

• binding the skeleton to a model (3D animation software)

Presenter Notes:
The motion capture pipeline consists of planning, shooting, data processing, skeleton creation, and mapping to characters. The most important point about using motion capture is to avoid problems by planning well ahead. After the shoot, data processing consists of reconstructing the data from the different camera views to produce 3D positional data and labeling the markers. Once this has been done, any noise in the data needs to be filtered and gaps in the data due to occlusion of markers need to be filled.
Page 27

Optical Marker Based 3D Motion Reconstruction

• Single camera
o Model assumptions required

• Multiple cameras
o Require at least 2 cameras, unique with 3
o Camera calibration

• Motion capture with markers
o Use retroreflective markers to simplify video information

Presenter Notes:
Pose reconstruction from a single camera requires model assumptions to determine 3D positions. One possibility is to use a geometric model of the human body in addition to edge and/or region features of the image sequence. The model may assist in resolving discrepancies in the reconstruction due to, for example, occlusion of body parts, or it may initiate or drive the analysis by predicting body postures in future frames. In general, the model is kept simple enough to reduce the number of parameters to be determined, but at the same time complex enough to represent sufficient characteristics of the subject. Pose reconstruction from multiple cameras has the additional work of calibrating the different cameras and determining correspondence between the motion sequences from the different cameras. Motion capture is a specialized form of pose reconstruction from video: rather than processing all the information available in a video sequence, the information is reduced by having the subject wear reflective markers, so it is only necessary to locate and process the markers in a motion sequence.
Page 28

Problems Related to Marker Occlusion

Presenter Notes:
Simple trajectory gap occurs when two markers become occluded at the same time and remain occluded for a short time. After they become visible again, the system is capable of identifying them correctly. To arrive at continuous data, the gap needs to be filled using linear or spline interpolation. Simple trajectory swap occurs when two markers become occluded at the same time and remain occluded for a short time. After they become visible again, the system identifies them incorrectly and swaps their labels. To arrive at continuous data, the markers need to be relabeled and the gap filled using linear or spline interpolation. Complex trajectory swap happens when two markers become occluded at the same time and remain occluded for a longer period of time while the markered body segments move in complex ways. When the markers become visible again, the system identifies them incorrectly and swaps their labels. To arrive at continuous data, the markers need to be relabeled and the gap filled using kinematic fit interpolation or animated manually.
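
A short sketch of the gap-filling step described above, assuming occluded frames are stored as NaNs in a single marker's trajectory. Cubic-spline interpolation is used here; np.interp per axis would give the linear variant. This is an illustration, not Vicon Blade's actual gap-filling tool, and the trajectory is synthetic.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def fill_marker_gaps(times, positions):
    """Fill occlusion gaps (NaN samples) in one marker's 3D trajectory.

    times     : 1D array of frame times
    positions : N x 3 array of marker positions, NaN where the marker was occluded
    """
    filled = positions.copy()
    valid = ~np.isnan(positions[:, 0])
    for axis in range(3):
        spline = CubicSpline(times[valid], positions[valid, axis])
        filled[~valid, axis] = spline(times[~valid])   # evaluate over the gap
    return filled

# A marker occluded for frames 3-5 of a short 120 fps trajectory.
t = np.arange(10) / 120.0
trajectory = np.column_stack([np.sin(8 * t), np.cos(8 * t), t])
trajectory[3:6] = np.nan
print(np.round(fill_marker_gaps(t, trajectory)[3:6], 4))
```

Relabeling swapped markers has to happen before any such filling, exactly as the notes describe.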
Page 29

Skeletal Solving (remapping mocap data to a character model)

• how to make markers move a skeleton
o photo reference or 3D scan of a performer
o CG model
o MotionBuilder or IKinema
o Vicon Blade
o other methods

• problems with detecting joint centers…

• organization of joint hierarchies


Page 30

Planning

• shot list
• performance space dimensions
• interactions in shot
• shots to be blended or looped
• length of shots
• size and location of props
• gross proportional differences for retargeting
• camera motion

Presenter Notes:
A shot list must be prepared in advance and should include the number of characters involved in the performance and any interactions between the characters or props. The dimensions of the performance space required for the shot and the length of the shot should also be indicated. Any shots to be blended or looped, and whether the blending will occur at the beginning or end of the shot, must also be determined. Finally, the size and location of props in the performance space and any gross proportional differences in the characters must also be listed. For example, if the character is wearing armor, the performer might need to hold his arms slightly away from his body so that the arms don't go through the armor on the character. Ideally, it's better to have the performer wear a costume weighted similarly to the one the character will be wearing: it is visually obvious if a person is carrying a lightweight object instead of a heavy one.
Page 31

Planning
• Character/Prop setup
o target skeleton/character topology
o ready stance considerations
o space preparation / occlusion removal / camera stability

• Marker setup
o marker redundancy
o three markers per segment
o place markers close to bone
o asymmetry
o recognizable configuration

• output format
• file naming conventions
• frame rate
• target software platform
• database management
• potential technical issues

Presenter Notes:
Planning a motion capture shoot involves understanding the objectives of the shoot and how the data is to be used. You should have a storyboard or game design ready and determine the character and marker setup of any characters or props to be motion captured. Character setup deals with the locations of joints or bones in the body of the character that will provide the final motion and deformations. Marker setup pertains to the locations of the markers that are used to collect the data. Character setup depends on marker setup because the data collected must be enough to calculate all the information needed by the joints. You need to have at least the character setup design decided to come up with a base pose and marker setup. When designing the marker set for the character, allow for marker redundancy, use at least three markers per rigid segment, and make sure the markers are placed close to the bone to reduce marker sliding on the skin. Also, markers should be placed on rigid segments asymmetrically so that, for example, the software can distinguish the left arm from the right arm.
Page 32

Virtual Production

• Pioneered for the production of James Cameron’s “Avatar”

• virtual camera
• simulcam

Presenter Notes:
Some technical issues to keep in mind are the file formats that your target software is capable of handling, file naming conventions and the number of frames per second required. File naming conventions can include the character the file belongs to, the setup used, and the action implemented. It is usually best to use a capture rate that is a multiple of the frames per second required. Finally, you will be handling a lot of files and it is useful to have some kind of database management system implemented.
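
One possible way to encode the conventions mentioned above in a small helper; the naming pattern, the .c3d extension and the field order are illustrative assumptions rather than a standard, and any real pipeline would use its own scheme.

```python
def take_filename(character, setup, action, take, capture_fps, delivery_fps):
    """Build a take name from character, setup and action, and sanity-check
    that the capture rate is a clean multiple of the delivery rate."""
    if capture_fps % delivery_fps != 0:
        raise ValueError(f"{capture_fps} fps is not a multiple of {delivery_fps} fps")
    return f"{character}_{setup}_{action}_t{take:02d}_{capture_fps}fps.c3d"

print(take_filename("warrior", "fullbody53", "swordSwing", 3, 120, 30))
# warrior_fullbody53_swordSwing_t03_120fps.c3d
```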
Page 34

Applications
• Biomedical and Physical Rehabilitation
Mixed Reality Rehabilitation
Markerless Gait Analysis
Tongue Capture for Speech Therapy


Page 35

Applications

• Historical Preservation
Native American Performance

• Arts
Open Ended Group
Walking City
ACCAD Motion Lab
Deakin University Motion Lab Projects
Virtual Puppets in Landing Place
Robotic Camera Choreography via Motion Capture


Page 36

Applications
• Life Sciences

• Engineering

• Military and Law Enforcement
VR weapon training with acoustic tracking system
Virtual Crime Scene Simulation

• Sports
Golf Training Simulator
Various Sports Analysis and Training


Page 37

Motion Technology and Integration Researchers

• University of Southern California (Paul Debevec)
• Chris Bregler (NYU, Stanford, Google)
• Carnegie Mellon (Jessica Hodgins)
• Max Planck Center (Christian Theobalt)
• Stanford (Vladlen Koltun)
• Synlab at Georgia Tech (Ali Mazalek)


Page 38


Mocap Studios

Giant Studios

Capture Lab

WETA

House of Moves

Imaginarium Studios

Jim Henson Digital Studio

ILMxLab

Page 39

References

1. Menache, Alberto, “Understanding Motion Capture for Computer Animation and Video Games”

2. http://www.kingston.ac.uk/Muybridge/
3. http://www.anotherscene.com/cinema/firsts/marey.html
4. http://cmp1.ucr.edu/exhibitions/edgerton/edgerton.html
5. Siggraph 2001 Course 51
6. http://www.metamotion.com
7. http://gvv.mpi-inf.mpg.de/files/pami2013/jgall_motioncapture_multiple_pami13.pdf
8. http://dl.acm.org/citation.cfm?id=2614176
9. http://www.utdallas.edu/~xxg061000/tongue.pdf
10. http://en.wikipedia.org/wiki/Nikolai_Bernstein
11. http://masgutovamethod.com/content/overlays/nikolai-bernstein.html
12. http://www.theartstory.org/movement-futurism.htm
13. Reinhard Klette and Garry Tee, "Understanding Human Motion: A Historic Review"
14. Johansson, Gunnar, "Visual perception of biological motion and a model for its analysis", Perception & Psychophysics, 1973, Vol. 14, No. 2, 201-211

©2003-2016, The Ohio State University