Markerless motion capture using machine learning and training with biomechanical data
First Claim
1. A method of training a learning machine to receive video data captured from an animate subject, and from the video data to generate biomechanical states of the animate subject, comprising:
placing markers on the animate subject;
using both marker-based motion capture camera(s) and markerless motion capture camera(s) to simultaneously acquire video sequences of the animate subject, thereby acquiring marker-based video data and markerless video data;
wherein the marker-based camera(s) detect the markers on the animate subject in a manner different from detection of the rest of the animate subject;
fitting the marker-based video data to a kinematic model of the animate subject, thereby providing a ground truth dataset;
combining the ground truth dataset with the markerless video data, thereby providing a training dataset;
inputting the markerless video data to the learning machine;
comparing the output of the learning machine to the training dataset;
iteratively using the results of the comparing step to adjust operation of the learning machine; and
using the learning machine to generate at least one of the biomechanical states of the animate subject.
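The training procedure recited above can be sketched in code. This is a minimal illustrative sketch, not the patented implementation: a linear model trained by gradient descent stands in for the learning machine, and random arrays stand in for the markerless-video features and the marker-derived ground-truth joint angles; all array sizes and the learning rate are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the two simultaneously captured streams:
# each row pairs one frame of markerless-video features with the
# ground-truth biomechanical state fitted from the marker-based capture.
n_frames, n_features, n_joints = 200, 16, 6
video_features = rng.normal(size=(n_frames, n_features))
true_mapping = rng.normal(size=(n_features, n_joints))
ground_truth_angles = video_features @ true_mapping  # marker-derived "ground truth"

weights = np.zeros((n_features, n_joints))  # the "learning machine" (linear here)
learning_rate = 0.5
for step in range(1000):
    predicted = video_features @ weights       # machine output from markerless data
    error = predicted - ground_truth_angles    # compare output to training dataset
    weights -= learning_rate * video_features.T @ error / n_frames  # iterative adjustment

# After training, the machine generates biomechanical states from video alone.
final_error = float(np.abs(video_features @ weights - ground_truth_angles).mean())
print(final_error)
```

In practice the learning machine would be a deep network operating on the video frames themselves, but the loop structure is the same: predict from markerless data, compare to the marker-derived training dataset, and adjust.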
Abstract
A method of using a learning machine to provide a biomechanical data representation of a subject based on markerless video motion capture. The learning machine is trained with both markerless video data and marker-based (or other body-worn sensor) data, with the marker-based or body-worn sensor data being used to generate a full biomechanical model, which is the “ground truth” data. This ground truth data is combined with the markerless video data to generate a training dataset.
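The “combined” step in the abstract can be read as frame-wise pairing of the two simultaneously captured streams. A minimal sketch, assuming timestamped frames and a made-up synchronization tolerance (neither detail is specified in the patent):

```python
# Pair each markerless video frame with the ground-truth biomechanical
# state whose timestamp is closest, forming (input, target) training examples.
# Timestamps are in seconds; the 'tolerance' threshold is a made-up parameter.
def build_training_dataset(video_frames, ground_truth, tolerance=0.005):
    """video_frames / ground_truth: lists of (timestamp, data) tuples."""
    dataset = []
    for t_video, frame in video_frames:
        # Nearest ground-truth sample in time.
        t_gt, state = min(ground_truth, key=lambda s: abs(s[0] - t_video))
        if abs(t_gt - t_video) <= tolerance:  # keep only well-synchronized pairs
            dataset.append((frame, state))
    return dataset

video = [(0.000, "f0"), (0.033, "f1"), (0.066, "f2")]
truth = [(0.001, "s0"), (0.034, "s1"), (0.100, "s2")]
pairs = build_training_dataset(video, truth)
print(pairs)  # frames without a close ground-truth sample are dropped
```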
8 Claims
7. A method of training a learning machine to receive video data captured from an animate subject, and from the video data to generate biomechanical states of the animate subject, comprising:
placing one or more biomechanical sensors on the animate subject;
using both a sensor detector and markerless motion capture camera(s) to simultaneously acquire video sequences of the animate subject, thereby acquiring sensor detector data and markerless video data;
wherein the sensor detector data is data acquired by detecting the one or more biomechanical sensors as the animate subject moves;
fitting the sensor detector data to a kinematic model of the animate subject, thereby providing a ground truth dataset;
combining the ground truth dataset with the markerless video data, thereby providing a training dataset;
inputting the markerless video data to the learning machine;
comparing the output of the learning machine to the training dataset;
iteratively using the results of the comparing step to adjust operation of the learning machine; and
using the learning machine to generate at least one of the biomechanical states of the animate subject.
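For the body-worn-sensor variant of claim 7, the sensor stream generally has to be brought onto the video timeline before the combining step, since worn sensors such as IMUs commonly sample faster than cameras. A sketch under those assumptions (the sample rates and the choice of linear interpolation are illustrative, not from the patent):

```python
import numpy as np

# Body-worn sensors (e.g. IMUs) typically sample faster than video cameras,
# so sensor-derived ground truth is resampled to the video timeline before
# the two streams are combined into a training dataset.
sensor_t = np.arange(0.0, 1.0, 0.01)         # 100 Hz sensor timestamps
sensor_angle = np.sin(2 * np.pi * sensor_t)  # one joint-angle channel
video_t = np.arange(0.0, 1.0, 1 / 30)        # 30 Hz video frame timestamps

# Linearly interpolate the sensor channel at each video frame time,
# yielding one ground-truth value per video frame.
angle_at_frames = np.interp(video_t, sensor_t, sensor_angle)
print(angle_at_frames.shape)
```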
Specification