Learning assessment method and device using a virtual tutor
First Claim
1. A method using a synthesized virtual tutor in a learning assessment device for providing performance assessment of a subject performing an action imitation task, comprising the steps of:
using at least a video action acquisition and analysis module to acquire a first action video of a first target for analyzing a first action-feature of said first target performing a first action;
using said at least a video action acquisition and analysis module to acquire a second action video of a second target for analyzing a second action-feature of said second target performing a second action imitating said first action;
establishing an intrinsic model of said second target by using reference data of said second target, said intrinsic model being constructed with a multidimensional morphable model by using image textures and motion flows of said second target;
generating a synthetic video with a virtual tutor having said image textures and motion flows of said second target but exhibiting a synthesized action-feature with behavior similar to said first action-feature based on a behavior model of said first target, said behavior model being constructed by using a set of reference data of said first target with a model transfer process and a model adaptation process, said model transfer process being composed of image texture matching and motion flow matching procedures for finding a set of prototype images for said first target with a matching-by-synthesis approach; and
assessing image texture and motion flow differences between said second action-feature and said synthesized action-feature through a learning assessment module;
wherein said image textures and motion flows of said second target are trained from a set of prototype images selected from said reference data of said second target with said multidimensional morphable model, and said synthetic video with said virtual tutor is generated by compositing said image textures and motion flows of said second target using said intrinsic model of said second target to form each frame of said synthetic video by warping and combining said prototype images with parameters generated according to said behavior model of said first target; and
wherein said model transfer process adopts said matching-by-synthesis approach further comprising the steps of:
establishing a set of key correspondences between a reference image of said first target and a reference image of said second target to derive dense point correspondences as an image warping function between said first and second targets;
generating a set of synthetic prototype motion flows for said first target by warping motion flows of said prototype images of said second target with said image warping function, and searching for an initial set of prototype images from said reference data of said first target whose motion flows are best matched to said synthetic prototype motion flows;
generating a set of synthetic prototype image textures by warping and combining said initial set of prototype images using linear dependency between said prototype images of said second target; and
iteratively searching for an updated set of prototype images from said reference data of said first target whose image textures and motion flows are best matched to said synthetic prototype image textures and motion flows, and taking said updated set of prototype images as said set of prototype images for said first target.
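The synthesis step recited above, in which each frame of the synthetic video is formed by warping and combining prototype images with parameters generated according to the behavior model, can be sketched as follows. This is an illustrative toy, not the patented implementation: images are flat pixel lists, a circular shift stands in for motion-flow warping, and all function names are invented for the sketch.

```python
# Toy sketch of morphable-model frame synthesis: each output frame is a
# parameter-weighted linear combination of warped prototype images.

def warp(pixels, shift):
    """Toy stand-in for motion-flow warping: circularly shift the pixels."""
    shift %= len(pixels)
    return pixels[-shift:] + pixels[:-shift] if shift else list(pixels)

def synthesize_frame(prototypes, flows, params):
    """Form one frame as a linear combination of warped prototype images."""
    assert len(prototypes) == len(flows) == len(params)
    n = len(prototypes[0])
    frame = [0.0] * n
    for proto, flow, weight in zip(prototypes, flows, params):
        warped = warp(proto, flow)
        for i in range(n):
            frame[i] += weight * warped[i]
    return frame

prototypes = [[1.0, 0.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0]]
flows = [1, 0]        # toy per-prototype shifts
params = [0.5, 0.5]   # behavior-model coefficients
print(synthesize_frame(prototypes, flows, params))  # [0.0, 0.5, 0.5, 0.0]
```

In a real system the prototypes would be the second target's selected reference images, and the parameters would come from the transferred behavior model of the first target.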
1 Assignment
0 Petitions
Abstract
Disclosed is a learning assessment method and device using a virtual tutor. The device comprises at least one action acquisition module, a virtual tutor synthesis module, and a learning assessment module. The method captures and analyzes a first and a second action-feature for a first and a second target, respectively, and constructs an intrinsic model of the second target based on reference data of the second target. A virtual tutor is synthesized by applying the first action-feature to the intrinsic model such that the virtual tutor exhibits the intrinsic characteristics of the second target but performs a synthesized action-feature similar to the first action-feature. The method then assesses the difference between the synthesized action-feature and the second action-feature.
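The assessment step described in the abstract, comparing the learner's second action-feature against the tutor's synthesized action-feature, might be sketched as a per-frame feature difference. The flat feature vectors and the 0-100 score scale below are illustrative assumptions, not part of the disclosure.

```python
# Toy sketch of the learning assessment step: score how closely the
# learner's action-feature tracks the synthesized action-feature, using
# a mean absolute per-frame difference mapped onto a 0-100 scale.

def assess(synth_features, learner_features):
    """Return a similarity score in [0, 100]; 100 means a perfect match."""
    assert len(synth_features) == len(learner_features)
    total, count = 0.0, 0
    for s_frame, l_frame in zip(synth_features, learner_features):
        for s, l in zip(s_frame, l_frame):
            total += abs(s - l)
            count += 1
    mean_err = total / count
    return max(0.0, 100.0 * (1.0 - mean_err))

tutor = [[0.2, 0.8], [0.5, 0.5]]
learner = [[0.2, 0.7], [0.6, 0.5]]
print(assess(tutor, learner))  # high score: the imitation is close
```

The claims assess image texture and motion flow differences separately; here both are folded into one generic feature vector for brevity.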
41 Citations
11 Claims
1. A method using a synthesized virtual tutor in a learning assessment device for providing performance assessment of a subject performing an action imitation task, comprising the steps of:
using at least a video action acquisition and analysis module to acquire a first action video of a first target for analyzing a first action-feature of said first target performing a first action;
using said at least a video action acquisition and analysis module to acquire a second action video of a second target for analyzing a second action-feature of said second target performing a second action imitating said first action;
establishing an intrinsic model of said second target by using reference data of said second target, said intrinsic model being constructed with a multidimensional morphable model by using image textures and motion flows of said second target;
generating a synthetic video with a virtual tutor having said image textures and motion flows of said second target but exhibiting a synthesized action-feature with behavior similar to said first action-feature based on a behavior model of said first target, said behavior model being constructed by using a set of reference data of said first target with a model transfer process and a model adaptation process, said model transfer process being composed of image texture matching and motion flow matching procedures for finding a set of prototype images for said first target with a matching-by-synthesis approach; and
assessing image texture and motion flow differences between said second action-feature and said synthesized action-feature through a learning assessment module;
wherein said image textures and motion flows of said second target are trained from a set of prototype images selected from said reference data of said second target with said multidimensional morphable model, and said synthetic video with said virtual tutor is generated by compositing said image textures and motion flows of said second target using said intrinsic model of said second target to form each frame of said synthetic video by warping and combining said prototype images with parameters generated according to said behavior model of said first target; and
wherein said model transfer process adopts said matching-by-synthesis approach further comprising the steps of:
establishing a set of key correspondences between a reference image of said first target and a reference image of said second target to derive dense point correspondences as an image warping function between said first and second targets;
generating a set of synthetic prototype motion flows for said first target by warping motion flows of said prototype images of said second target with said image warping function, and searching for an initial set of prototype images from said reference data of said first target whose motion flows are best matched to said synthetic prototype motion flows;
generating a set of synthetic prototype image textures by warping and combining said initial set of prototype images using linear dependency between said prototype images of said second target; and
iteratively searching for an updated set of prototype images from said reference data of said first target whose image textures and motion flows are best matched to said synthetic prototype image textures and motion flows, and taking said updated set of prototype images as said set of prototype images for said first target. - View Dependent Claims (2, 3, 4, 5, 6, 7, 8)
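The motion-flow matching recited in the claim, which searches the first target's reference data for prototypes whose flows best match the synthetic prototype flows, amounts to a nearest-neighbor search. A minimal sketch, with toy flat flow vectors and invented names:

```python
# Toy sketch of the prototype search inside matching-by-synthesis: for
# each synthetic prototype motion flow, pick the candidate from the
# first target's reference data whose flow is closest, here by squared
# Euclidean distance.

def best_matches(synthetic_flows, candidate_flows):
    """Return, per synthetic flow, the index of the closest candidate."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(candidate_flows)),
                key=lambda i: dist(candidate_flows[i], target))
            for target in synthetic_flows]

synthetic = [[0.0, 1.0], [1.0, 0.0]]
candidates = [[0.9, 0.1], [0.1, 0.9], [0.5, 0.5]]
print(best_matches(synthetic, candidates))  # [1, 0]
```

The claim's iterative refinement would repeat this search, alternating between texture and flow matching, until the selected prototype set stabilizes.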
9. A device using a synthesized virtual tutor for providing performance assessment of a subject performing an action imitation task, comprising:
at least a video action acquisition and analysis module for acquiring a first action video of a first target and analyzing a first action-feature of said first target performing a first action, and acquiring a second action video of a second target and analyzing a second action-feature of said second target performing a second action imitating said first action;
a virtual tutor synthesis module for generating a synthetic video with a virtual tutor having image textures and motion flows of said second target but exhibiting a synthesized action-feature with behavior similar to said first action-feature based on a behavior model of said first target, said behavior model being constructed by using a set of reference data of said first target with a model transfer process and a model adaptation process, said model transfer process being composed of image texture matching and motion flow matching procedures for finding a set of prototype images for said first target with a matching-by-synthesis approach; and
a learning assessment module for assessing image texture and motion flow differences between said second action-feature and said synthesized action-feature;
wherein said image textures and motion flows of said second target are trained from a set of prototype images selected from said reference data of said second target with said multidimensional morphable model, and said synthetic video with said virtual tutor is generated by compositing said image textures and motion flows of said second target using said intrinsic model of said second target to form each frame of said synthetic video by warping and linearly combining said prototype images with parameters generated according to said behavior model of said first target; and
wherein said model transfer module adopts said matching-by-synthesis approach comprising the steps of:
establishing a set of key correspondences between a reference image of said first target and a reference image of said second target to derive dense point correspondences as an image warping function between said first and second targets;
generating a set of synthetic prototype motion flows for said first target by warping motion flows of said prototype images of said second target with said image warping function, and searching for an initial set of prototype images from said reference data of said first target whose motion flows are best matched to said synthetic prototype motion flows;
generating a set of synthetic prototype image textures by warping and combining said initial set of prototype images using linear dependency between said prototype images of said second target; and
iteratively searching for an updated set of prototype images from said reference data of said first target whose image textures and motion flows are best matched to said synthetic prototype image textures and motion flows, and taking said updated set of prototype images as said set of prototype images for said first target. - View Dependent Claims (10, 11)
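The first step of the claimed matching-by-synthesis, deriving dense point correspondences from sparse key correspondences as an image warping function, can be illustrated in one dimension with piecewise-linear interpolation; a real system would use a 2-D dense-correspondence method. The names and the 1-D setting are illustrative assumptions.

```python
# Toy sketch of turning sparse key correspondences into a dense warping
# function: matched keypoint positions between the two targets' reference
# images are interpolated to every coordinate in between.

def dense_warp(key_src, key_dst):
    """Given matched key positions, return f(x) mapping any source
    coordinate to its corresponding destination coordinate."""
    pairs = sorted(zip(key_src, key_dst))

    def f(x):
        # clamp coordinates outside the keypoint range
        if x <= pairs[0][0]:
            return pairs[0][1]
        if x >= pairs[-1][0]:
            return pairs[-1][1]
        for (x0, y0), (x1, y1) in zip(pairs, pairs[1:]):
            if x0 <= x <= x1:
                t = (x - x0) / (x1 - x0)
                return y0 + t * (y1 - y0)
    return f

# keypoints of target 1 at 0 and 10, matched to target 2 at 0 and 20
f = dense_warp([0.0, 10.0], [0.0, 20.0])
print(f(5.0))  # midpoint maps to midpoint: 10.0
```

The resulting function plays the role of the claim's image warping function: it is applied to the second target's prototype motion flows to produce the synthetic prototype flows for the first target.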
Specification