3D human interface apparatus using motion recognition based on dynamic image processing
Abstract
A 3D human interface apparatus using motion recognition based on dynamic image processing, in which the motion of an operator-operated object serving as an imaging target can be recognized accurately and stably. The apparatus includes: an image input unit for entering a plurality of time series images of an object operated by the operator into a motion representing a command; a feature point extraction unit for extracting at least four feature points including at least three reference feature points and one fiducial feature point on the object, from each of the images; a motion recognition unit for recognizing the motion of the object by calculating motion parameters, according to an affine transformation determined from changes of positions of the reference feature points on the images, and a virtual parallax for the fiducial feature point expressing a difference between an actual position change on the images and a virtual position change according to the affine transformation; and a command input unit for inputting the command indicated by the motion of the object recognized by the motion recognition unit.
15 Claims
1. A 3D human interface apparatus for facilitating 3D pointing and controlling command inputs by an operator, comprising:
image input means for entering a plurality of time series images of an object operated by the operator into a motion representing a command;
feature point extraction means for extracting at least four feature points including at least three reference feature points and one fiducial feature point on the object, from each of said images;
motion recognition means for recognizing the motion of the object by calculating motion parameters, according to an affine transformation determined from changes of positions of the reference feature points on said images, and a virtual parallax for the fiducial feature point expressing a difference between an actual position change on said images and a virtual position change according to the affine transformation; and
command input means for inputting the command indicated by the motion of the object recognized by the motion recognition means.
(Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9, 10, 11)
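The patent text does not include source code, but the computation named in claim 1 can be sketched: fit an affine transformation to the observed displacements of the three reference feature points, then take the virtual parallax of the fiducial point as the difference between its actual displacement and the displacement the affine map predicts. The function names and the linear-system formulation below are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def estimate_affine(ref_before, ref_after):
    """Solve p' = A @ p + b from three reference-point correspondences.

    Three points give six equations in the six unknowns
    (a11, a12, a21, a22, b1, b2), so the system is exactly determined.
    """
    M, rhs = [], []
    for (x, y), (xp, yp) in zip(ref_before, ref_after):
        M.append([x, y, 0, 0, 1, 0]); rhs.append(xp)
        M.append([0, 0, x, y, 0, 1]); rhs.append(yp)
    a11, a12, a21, a22, b1, b2 = np.linalg.solve(
        np.array(M, dtype=float), np.array(rhs, dtype=float))
    return np.array([[a11, a12], [a21, a22]]), np.array([b1, b2])

def virtual_parallax(A, b, fid_before, fid_after):
    """Actual position change minus the virtual change predicted by the
    affine transformation (the 'virtual parallax' of claim 1)."""
    predicted = A @ np.asarray(fid_before, dtype=float) + b
    return np.asarray(fid_after, dtype=float) - predicted
```

A fiducial point lying on the plane spanned by the reference points yields zero virtual parallax; a nonzero residual indicates out-of-plane structure or motion, which is what the motion recognition means exploits.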
12. A 3D human interface apparatus for facilitating 3D pointing and controlling command inputs by an operator, comprising:
image input means for entering a plurality of time series images of an object operated by the operator into a motion representing a command;
feature point extraction means for extracting a multiplicity of feature points distributed over an entire imaging field of said images;
object detection means for detecting the object in said images by calculating translational motion directions on said images for said multiplicity of feature points, plotting a distribution plot of the translational motion directions over the entire imaging field, and separating an area having identical and distinctive translational motion directions distributed therein in the distribution plot as the object;
motion recognition means for recognizing the motion of said area separated as the object by image processing said images; and
command input means for inputting the command indicated by the motion of the object recognized by the motion recognition means.
(Dependent claims: 13)
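Claim 12's object detection means can be sketched as direction clustering: compute each feature point's translational motion direction, quantize the directions into coarse bins (a stand-in for the claim's distribution plot), and separate the points whose shared direction differs from the dominant (background) direction. The binning scheme and function name are assumptions for illustration.

```python
import numpy as np

def segment_by_motion_direction(points_before, points_after, bins=8):
    """Return indices of feature points whose translational motion direction
    is identical among themselves but distinct from the dominant direction
    in the imaging field (a simplified sketch of claim 12)."""
    disp = np.asarray(points_after, dtype=float) - np.asarray(points_before, dtype=float)
    angles = np.arctan2(disp[:, 1], disp[:, 0])
    # Quantize directions into coarse bins; the bin histogram plays the
    # role of the claim's distribution plot of translational directions.
    idx = ((angles + np.pi) / (2 * np.pi) * bins).astype(int) % bins
    dominant = int(np.bincount(idx, minlength=bins).argmax())
    # Points moving in a non-dominant direction form the candidate object area.
    return np.nonzero(idx != dominant)[0]
```

In the patent's setting the dominant bin corresponds to the static background (or global camera motion), so the remaining coherent cluster is taken as the operator-moved object.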
14. A 3D human interface apparatus for facilitating 3D pointing and controlling command inputs by an operator, comprising:
image input means for entering a plurality of images of an object operated by the operator into a motion representing a command;
feature point extraction means for extracting a multiplicity of feature points distributed over the object in said images;
structure extraction means for extracting a structure of the object by determining relative positions of said multiplicity of feature points in a depth direction on said images according to a ratio of virtual parallaxes for each two closely located ones of said multiplicity of feature points, each virtual parallax expressing a difference between an actual position change on said images and a virtual position change according to an affine transformation determined from changes of positions of the feature points on said images;
motion recognition means for recognizing the motion of the object by image processing said images;
command input means for inputting the command indicated by the motion of the object recognized by the motion recognition means; and
display means for displaying a 3D model having the structure of the object as extracted by the structure extraction means which moves in accordance with the motion indicated by the command inputted by the command input means.
(Dependent claims: 15)
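The structure extraction of claim 14 rests on the property that, for two closely located feature points, the ratio of their virtual-parallax magnitudes approximates the ratio of their depth offsets from the reference plane of the affine transformation. A minimal sketch, assuming the virtual parallaxes have already been computed per feature point (the pairing of consecutive points as "closely located" is an illustrative assumption):

```python
import numpy as np

def depth_ratios(parallaxes):
    """Ratios of virtual-parallax magnitudes for each two consecutive
    (closely located) feature points; under the patent's model each ratio
    estimates the ratio of the points' depth offsets from the reference
    plane, giving relative positions in the depth direction."""
    mags = np.linalg.norm(np.asarray(parallaxes, dtype=float), axis=1)
    return mags[1:] / mags[:-1]
```

Chaining these pairwise ratios orders all feature points in depth up to a global scale, which is sufficient for the display means to render a relative-depth 3D model of the object.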
Specification