
Learning assessment method and device using a virtual tutor

  • US 8,021,160 B2
  • Filed: 10/06/2006
  • Issued: 09/20/2011
  • Est. Priority Date: 07/22/2006
  • Status: Expired due to Fees
First Claim

1. A method using a synthesized virtual tutor in a learning assessment device for providing performance assessment of a subject performing an action imitation task, comprising the steps of:

    using at least a video action acquisition and analysis module to acquire a first action video of a first target for analyzing a first action-feature of said first target performing a first action;

    using said at least a video action acquisition and analysis module to acquire a second action video of a second target for analyzing a second action-feature of said second target performing a second action imitating said first action;

    establishing an intrinsic model of said second target by using reference data of said second target, said intrinsic model being constructed with a multidimensional morphable model by using image textures and motion flows of said second target;

    generating a synthetic video with a virtual tutor having said image textures and motion flows of said second target but exhibiting a synthesized action-feature with behavior similar to said first action-feature based on a behavior model of said first target, said behavior model being constructed by using a set of reference data of said first target with a model transfer process and a model adaptation process, said model transfer process being composed of image texture matching and motion flow matching procedures for finding a set of prototype images for said first target with a matching-by-synthesis approach; and

    assessing image texture and motion flow differences between said second action-feature and said synthesized action-feature through a learning assessment module;

    wherein said image textures and motion flows of said second target are trained from a set of prototype images selected from said reference data of said second target with said multidimensional morphable model, and said synthetic video with said virtual tutor is generated by compositing said image textures and motion flows of said second target using said intrinsic model of said second target to form each frame of said synthetic video by warping and combining said prototype images with parameters generated according to said behavior model of said first target; and

    wherein said model transfer process adopts said matching-by-synthesis approach further comprising the steps of:

    establishing a set of key correspondences between a reference image of said first target and a reference image of said second target to derive dense point correspondences as an image warping function between said first and second targets;

    generating a set of synthetic prototype motion flows for said first target by warping motion flows of said prototype images of said second target with said image warping function, and searching for an initial set of prototype images from said reference data of said first target whose motion flows are best matched to said synthetic prototype motion flows;

    generating a set of synthetic prototype image textures by warping and combining said initial set of prototype images using linear dependency between said prototype images of said second target; and

    iteratively searching for an updated set of prototype images from said reference data of said first target whose image textures and motion flows are best matched to said synthetic prototype image textures and motion flows, and taking said updated set of prototype images as said set of prototype images for said first target.

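The claim above walks through several processing stages; the sketches below show one plausible way each stage could be coded. All library choices (OpenCV, NumPy), function names, and parameters are assumptions for illustration, not the patent's implementation. First, a minimal sketch of the video action acquisition and analysis module: frames of a clip are read in, the grayscale frames serve as the image textures, and dense optical flow between consecutive frames serves as the motion-flow part of the action-feature.

```python
# Hypothetical sketch of the video action acquisition and analysis module.
# Assumes OpenCV (cv2) and NumPy; the frame size and flow parameters are
# illustrative only.
import cv2
import numpy as np

def acquire_action_video(path, size=(128, 128)):
    """Read a clip and return its grayscale frames (the image textures)."""
    cap = cv2.VideoCapture(path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        frames.append(cv2.resize(gray, size))
    cap.release()
    return frames

def extract_action_feature(frames):
    """Dense optical flow between consecutive frames as the motion-flow feature."""
    flows = []
    for prev, nxt in zip(frames, frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, nxt, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        flows.append(flow)  # flow[y, x] = (dx, dy) displacement in pixels
    return flows
```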
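Next, a sketch of the intrinsic model of the second target: a multidimensional morphable model in which each frame of the synthetic virtual-tutor video is formed by warping and linearly combining the second target's prototype images and prototype motion flows. The blending weights stand in for the parameters generated according to the first target's behavior model; the backward-warping helper and the linear form are assumptions.

```python
# Hypothetical sketch of frame synthesis from prototype image textures and
# motion flows of the second target (the morphable-model compositing step).
import cv2
import numpy as np

def warp(image, flow):
    """Backward-warp an image along a dense flow field using cv2.remap."""
    h, w = image.shape[:2]
    gx, gy = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (gx + flow[..., 0]).astype(np.float32)
    map_y = (gy + flow[..., 1]).astype(np.float32)
    return cv2.remap(image, map_x, map_y, cv2.INTER_LINEAR)

def synthesize_frame(prototypes, proto_flows, texture_coeffs, flow_coeffs):
    """Compose one synthetic-video frame from prototype textures and flows.

    prototypes     : prototype images (textures) of the second target
    proto_flows    : prototype motion flows of the second target
    texture_coeffs : blending weights for the prototype textures
    flow_coeffs    : weights for the prototype flows; both weight sets play the
                     role of parameters from the first target's behavior model
    """
    # Weighted combination of prototype flows gives the frame's pose ...
    target_flow = sum(c * f for c, f in zip(flow_coeffs, proto_flows))
    # ... and the texture is the weighted blend of warped prototypes.
    blended = sum(c * warp(p, target_flow).astype(np.float32)
                  for c, p in zip(texture_coeffs, prototypes))
    return np.clip(blended, 0, 255).astype(np.uint8)
```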
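A sketch of the matching-by-synthesis model transfer follows: sparse key correspondences between reference images of the two targets are densified into an image warping function, the second target's prototype motion flows are pushed through that warp to form synthetic prototype flows, an initial set of first-target prototypes is picked by flow matching, and the set is refined iteratively by synthesizing prototype textures and re-matching on both texture and flow. The affine correspondence fit, the SSD matching score, and the warp and blend callables are stand-ins, not the patent's actual procedure.

```python
# Hypothetical sketch of the matching-by-synthesis model transfer process.
import numpy as np

def dense_correspondence(src_pts, dst_pts, shape):
    """Densify sparse key correspondences into an image warping function.
    An affine fit is used here purely for illustration."""
    src = np.hstack([np.asarray(src_pts, float), np.ones((len(src_pts), 1))])
    A = np.linalg.lstsq(src, np.asarray(dst_pts, float), rcond=None)[0]
    gy, gx = np.mgrid[0:shape[0], 0:shape[1]]
    pts = np.stack([gx.ravel(), gy.ravel(), np.ones(gx.size)], axis=1)
    return (pts @ A).reshape(shape[0], shape[1], 2)  # dense point correspondences

def ssd(a, b):
    """Sum-of-squared-differences matching score."""
    return float(np.sum((np.asarray(a, float) - np.asarray(b, float)) ** 2))

def best_match(ref_flow, ref_tex, cand_flows, cand_texs, w_tex=1.0):
    """Index of the candidate whose flow (and optionally texture) matches best."""
    scores = [ssd(ref_flow, f) +
              (w_tex * ssd(ref_tex, t) if ref_tex is not None else 0.0)
              for f, t in zip(cand_flows, cand_texs)]
    return int(np.argmin(scores))

def transfer_prototypes(learner_flows, tutor_flows, tutor_texs,
                        warp_field, blend, n_iter=3):
    """Search the first target's reference data for its prototype images.

    learner_flows : prototype motion flows of the second target
    tutor_flows   : motion flow of each frame in the first target's reference data
    tutor_texs    : image texture of each frame in the first target's reference data
    warp_field    : callable applying the dense warping function to a flow field
    blend         : callable that warps and combines a set of images using the
                    linear dependency among the second target's prototypes
    """
    # Synthetic prototype flows, then the initial prototype set (flow matching only).
    synth_flows = [warp_field(f) for f in learner_flows]
    idx = [best_match(sf, None, tutor_flows, tutor_texs) for sf in synth_flows]
    for _ in range(n_iter):
        # Synthetic prototype textures built from the current prototype set.
        synth_texs = [blend([tutor_texs[i] for i in idx], k)
                      for k in range(len(idx))]
        # Re-search on both texture and flow, iterating to refine the set.
        idx = [best_match(sf, st, tutor_flows, tutor_texs)
               for sf, st in zip(synth_flows, synth_texs)]
    return idx
```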
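Finally, a minimal sketch of the learning assessment module: assuming the subject's clip and the synthetic virtual-tutor clip are already frame-aligned, the image-texture and motion-flow differences between the second action-feature and the synthesized action-feature are accumulated into per-frame errors and an overall imitation score. The mean-squared-error metric and the equal weighting are illustrative assumptions.

```python
# Hypothetical sketch of the learning assessment module.
import numpy as np

def assess_imitation(subject_texs, subject_flows, tutor_texs, tutor_flows,
                     w_texture=0.5, w_flow=0.5):
    """Return per-frame errors and an overall score (lower = closer imitation)."""
    per_frame = []
    for st, sf, tt, tf in zip(subject_texs, subject_flows,
                              tutor_texs, tutor_flows):
        tex_err = np.mean((np.asarray(st, float) - np.asarray(tt, float)) ** 2)
        flow_err = np.mean((np.asarray(sf, float) - np.asarray(tf, float)) ** 2)
        per_frame.append(w_texture * tex_err + w_flow * flow_err)
    overall = float(np.mean(per_frame)) if per_frame else 0.0
    return per_frame, overall
```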