Apparatus for presenting mixed reality shared among operators
First Claim
1. A mixed reality presentation apparatus which generates a three-dimensional virtual image associated with a collaborative operation to be done by a plurality of operators in a common mixed reality environment, and displays the generated virtual image on display devices respectively attached to the plurality of operators, comprising:
first sensor means for detecting a status of each of the actuators which are operated by the plurality of operators and move as the collaborative operation progresses;
second sensor means for detecting a view point position of each of the plurality of operators in an environment of the collaborative operation; and
generation means for generating a three-dimensional model in said common mixed reality environment, said generation means generating three-dimensional virtual images by transforming the three-dimensional model on the basis of an operation result of the collaborative operation that has progressed according to a change in status of each of the plurality of actuators detected by said first sensor means and the view point position of each operator detected by said second sensor means, and outputting the generated three-dimensional virtual images that are viewed from the view point position of each operator to each display device.
2 Assignments
0 Petitions
Abstract
There is disclosed a mixed reality presentation apparatus which generates and displays a three-dimensional virtual image on a see-through display device so as to allow a plurality of players to play a multi-player game in a mixed reality environment. The apparatus has a CCD camera for detecting the mallet positions of the plurality of players, and a sensor for detecting the view point position of each player in the environment of the multi-player game. The apparatus generates a three-dimensional virtual image that represents a game result of the multi-player game that has progressed in accordance with changes in mallet position detected by the CCD camera and is viewed from the view point position of each player detected by the sensor, and outputs the generated image to the corresponding see-through display device. The apparatus determines the motion of each player by detecting infrared rays output from the corresponding mallet on the basis of an image captured by the CCD camera. The view point position detected by the sensor is corrected by specifying the marker in an image obtained by a camera attached to the head of each player, and comparing the marker position in that image with an actual marker position.
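The viewpoint-correction step described at the end of the abstract (compare the marker's observed position in the head-camera image with the position predicted from the sensor-reported pose, then correct the viewpoint) can be sketched as below. This is an illustrative reconstruction, not the patented implementation: the pinhole projection, the translation-only pose, and all function names are assumptions.

```python
def predict_marker_pixel(head_pose, marker_world, focal=500.0):
    """Project a known marker into the head-camera image using the
    (possibly drifting) sensor-reported head pose.
    Simple pinhole model; names and values are illustrative."""
    x, y, z = (m - p for m, p in zip(marker_world, head_pose))
    return (focal * x / z, focal * y / z)

def correct_viewpoint(head_pose, marker_world, observed_pixel, focal=500.0):
    """Shift the head pose so the predicted marker pixel matches the one
    actually observed in the head-camera image (an image-plane correction,
    as the abstract describes)."""
    px, py = predict_marker_pixel(head_pose, marker_world, focal)
    ox, oy = observed_pixel
    depth = marker_world[2] - head_pose[2]
    dx = (ox - px) * depth / focal
    dy = (oy - py) * depth / focal
    return (head_pose[0] - dx, head_pose[1] - dy, head_pose[2])
```

For example, a sensor pose that has drifted by (0.5, -0.2) is pulled back to the true viewpoint once the marker's observed pixel is supplied.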
366 Citations
62 Claims
1. A mixed reality presentation apparatus which generates a three-dimensional virtual image associated with a collaborative operation to be done by a plurality of operators in a common mixed reality environment, and displays the generated virtual image on display devices respectively attached to the plurality of operators, comprising:
first sensor means for detecting a status of each of the actuators which are operated by the plurality of operators and move as the collaborative operation progresses;
second sensor means for detecting a view point position of each of the plurality of operators in an environment of the collaborative operation; and
generation means for generating a three-dimensional model in said common mixed reality environment, said generation means generating three-dimensional virtual images by transforming the three-dimensional model on the basis of an operation result of the collaborative operation that has progressed according to a change in status of each of the plurality of actuators detected by said first sensor means and the view point position of each operator detected by said second sensor means, and outputting the generated three-dimensional virtual images that are viewed from the view point position of each operator to each display device.
2. The apparatus according to claim 1, wherein said first sensor means comprises:
an image sensing camera which includes maximum moving ranges of the actuators, each of which moves upon operation of each respective operator, within a field of view thereof; and
image processing means for performing image-processing to detect a position of each actuator in an image obtained by said camera.
3. The apparatus according to claim 1, wherein the actuator includes an illuminator emitting light having a predetermined wavelength, and said first sensor means comprises a camera which is sensitive to the light having the predetermined wavelength.
4. The apparatus according to claim 1, wherein the actuator is a mallet operated by a hand of the operator.
5. The apparatus according to claim 1, wherein the display device comprises an optical transmission type display device.
6. The apparatus according to claim 1, wherein said second sensor means comprises:
a generator for generating an AC magnetic field; and
a magnetic sensor attached to the head portion of each operator.
7. The apparatus according to claim 1, wherein said second sensor means detects a head position and posture of each operator, and calculates a view point position in accordance with the detected head position and posture.
8. The apparatus according to claim 1, wherein said generation means comprises:
storage means for storing a rule of the collaborative operation;
means for generating a virtual image representing a progress result of the collaborative operation in accordance with the rule stored in said storage means in correspondence with detected changes in position of the plurality of actuators; and
means for generating a three-dimensional virtual image for each view point position by transforming a coordinate position for each view point position of each operator detected by said second sensor means.
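The structure of claim 8 (a stored rule of the collaborative operation, applied to detected changes in actuator position to advance the shared result) can be sketched as follows. The air-hockey-style reflection rule matches the mallet game of the abstract, but the specific rule, state layout, and names are illustrative assumptions, not the claimed implementation.

```python
def hit_rule(puck, mallet, radius=1.0):
    """Stored rule of the collaborative operation: a mallet within
    `radius` of the puck reflects the puck's velocity."""
    dx = puck["pos"][0] - mallet[0]
    dy = puck["pos"][1] - mallet[1]
    if dx * dx + dy * dy <= radius * radius:
        puck["vel"] = (-puck["vel"][0], -puck["vel"][1])
    return puck

def progress(puck, mallet_positions):
    """Advance the shared state one step, applying the stored rule to
    each detected mallet (actuator) position."""
    for mallet in mallet_positions:
        puck = hit_rule(puck, mallet)
    puck["pos"] = (puck["pos"][0] + puck["vel"][0],
                   puck["pos"][1] + puck["vel"][1])
    return puck
```

The resulting state is what the generation means of claim 8 would then render from each operator's viewpoint.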
9. A game apparatus having a mixed reality presentation apparatus of claim 1.
10. A mixed reality presentation apparatus which generates a three-dimensional virtual image associated with a collaborative operation to be done by a plurality of operators in a common mixed reality environment, and displays the generated virtual image on display devices respectively attached to the plurality of operators, comprising:
a camera arranged so as to include a plurality of actuators operated by the plurality of operators in the collaborative operation within a field of view thereof;
actuator position detection means for outputting information relating to positions of the actuators associated with a coordinate system of that environment on the basis of an image sensed by said camera;
sensor means for detecting and outputting a view point position of each of the plurality of operators in the environment of the collaborative operation; and
image generation means for defining a three-dimensional model in a common field of view of the plurality of operators and generating a three-dimensional virtual image of said three-dimensional model as a progress result that is viewed from the view point position of each operator detected by said sensor means to each display device so as to present the progress result of the collaborative operation relating to said three-dimensional model that has progressed according to detected changes in position of the actuator to each operator.
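The per-operator output of claim 10 (one shared three-dimensional model, rendered once per detected viewpoint) can be sketched minimally as below. A translation-only "camera" stands in for the full view transform; the function names are illustrative assumptions.

```python
def to_view(point, viewpoint):
    """Express a world-space point of the shared model relative to one
    operator's detected viewpoint (translation-only for brevity)."""
    return tuple(c - v for c, v in zip(point, viewpoint))

def render_for_all(model_points, viewpoints):
    """Produce one view-dependent point list per operator from the same
    shared three-dimensional model, as claim 10 requires."""
    return [[to_view(p, vp) for p in model_points] for vp in viewpoints]
```

Each inner list corresponds to the image sent to one operator's display device.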
11. A mixed reality presentation apparatus which generates a three-dimensional virtual image associated with a collaborative operation to be done by a plurality of operators in a predetermined mixed reality environment, and displays the generated virtual image on see-through display devices respectively attached to the plurality of operators, comprising:
a first camera which substantially includes the plurality of operators within a field of view thereof;
a first processor arranged so as to calculate operation positions of the plurality of operators on the basis of an image obtained by said first camera;
a detection device detecting a view point position of each operator using a plurality of sensors attached to the plurality of operators;
a plurality of second cameras sensing front fields of the individual operators, at least one second camera being attached to each of the plurality of operators;
a second processor calculating information associated with a line of sight of each operator on the basis of each of images from said plurality of second cameras;
a third processor correcting the view point position of each operator detected by the sensor using the line of sight information from said second processor and outputting the corrected view point position as a position on a coordinate system of the mixed reality environment;
a first image processing device making the collaborative operation virtually progress on the basis of the operation position of each operator calculated by said first processor, and generating three-dimensional virtual images representing results that have changed along with the progress of the collaborative operation for the plurality of operators; and
a second image processing device transforming coordinate positions of the three-dimensional virtual images for the individual operators generated by said first image processing device in accordance with the individual corrected view point positions calculated by said third processor, and outputting the coordinate-transformed images to the see-through display devices.
12. A method of generating a three-dimensional virtual image associated with a collaborative operation to be done within a common mixed reality environment so as to display the image on display devices attached to a plurality of operators in the common mixed reality environment, comprising:
the image sensing step of sensing a plurality of actuators operated by the plurality of operators by a camera that includes the plurality of operators within a field of view thereof;
the actuator position acquisition step of calculating information relating to positions of the actuators associated with a coordinate system of the common mixed reality environment on the basis of the image sensed by the camera;
the view point position detection step of detecting a view point position of each of the plurality of operators in the common mixed reality environment of the collaborative operation on the coordinate system of the common mixed reality environment;
the progress step of making the collaborative operation virtually progress in accordance with changes in position of the plurality of actuators calculated in the actuator position acquisition step; and
the image generation step of generating a three-dimensional model in the common mixed reality environment and outputting a three-dimensional virtual image of said three-dimensional model as a progress result in the progress step that is viewed from the view point position of each operator detected in the view point position detection step to each display device so as to present the progress result in the progress step to each operator.
15. A mixed reality presentation method for generating a three-dimensional virtual image associated with a collaborative operation to be done by a plurality of operators in a predetermined mixed reality environment, and displaying the generated virtual image on see-through display devices respectively attached to the plurality of operators, comprising:
the first image sensing step of capturing an image using a first camera which substantially includes the plurality of operators within a field of view thereof;
the first detection step of detecting operation positions of the plurality of operators on the basis of the image sensed by the first camera;
the second detection step of detecting a view point position of each operator using a plurality of sensors respectively attached to the plurality of operators;
the second image sensing step of sensing a front field of each operator using each of a plurality of second cameras, at least one second camera being attached to each of the plurality of operators;
the line of sight calculation step of calculating information associated with a line of sight of each operator on the basis of each of images obtained from the plurality of second cameras;
the correction step of correcting the view point position of each operator detected by the sensor on the basis of the line of sight information calculated in the line of sight calculation step, and obtaining the corrected view point position as a position on a coordinate system of the mixed reality environment;
the generation step of making the collaborative operation virtually progress on the basis of the operation positions of the individual operators detected in the first detection step, and generating three-dimensional virtual images that represent results of the collaborative operation and are viewed from the view point positions of the plurality of operators; and
the step of transforming coordinate positions of the three-dimensional virtual images for the individual operators in the generation step in accordance with the individual corrected view point positions obtained in the correction step, and outputting the coordinate-transformed images to the see-through display devices.
17. A position/posture detection apparatus for detecting a position/posture of a predetermined portion of an operator or an object operated by the operator, comprising:
a position/posture sensor for measuring a three-dimensional position and posture of the predetermined portion of the operator or the object operated by the operator to output an operator's position and posture signal;
a camera sensing images of a first plurality of markers arranged at known positions in the environment;
detection means for processing an image signal from said camera, tracking a marker of the first plurality of markers, and detecting a coordinate value of the tracked marker in a coordinate system; and
calculation means for calculating a position/posture signal representing a position and posture of the operating portion, including correction of the operator's position and posture signal outputted from the position/posture sensor based on the coordinate value of the tracked marker detected by said detection means.
means for detecting a signal representing a position/posture of the camera;
means for converting three-dimensional coordinates of said first plurality of markers in a world coordinate system into a coordinate value in terms of the image coordinate system, in accordance with the signal representing position/posture of the camera; and
means for identifying a marker to be tracked by comparing the coordinates of the first plurality of markers in the world coordinate system and an image coordinate value of the tracked marker.
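The identification step above (project each known marker's world position into the image using the camera pose, then match it against the tracked image coordinate) can be sketched as below. This is assumed arithmetic for illustration, not the claimed means-plus-function structure; the pinhole model and nearest-projection matching are assumptions.

```python
import math

def project(marker_world, camera_pos, focal=500.0):
    """Convert a marker's world coordinates into image coordinates using
    the camera position (translation-only pinhole model, illustrative)."""
    x, y, z = (m - c for m, c in zip(marker_world, camera_pos))
    return (focal * x / z, focal * y / z)

def identify_marker(markers_world, camera_pos, tracked_pixel, focal=500.0):
    """Identify the tracked marker: the known marker whose projection
    lies closest to the tracked image coordinate."""
    def dist(i):
        u, v = project(markers_world[i], camera_pos, focal)
        return math.hypot(u - tracked_pixel[0], v - tracked_pixel[1])
    return min(range(len(markers_world)), key=dist)
```

The returned index names the world-coordinate marker that the image-space tracker is currently following.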
33. The apparatus according to claim 30, wherein the identifying means identifies a marker selected by the selection means in terms of a world coordinate system.
34. The apparatus according to claim 33, wherein the identifying means comprises:
means for detecting a signal representing a position/posture of the camera;
means for converting a coordinate of the tracked marker in terms of a camera coordinate system into a coordinate value in terms of the world coordinate system; and
selection means for selecting said at least one marker to be tracked by comparing coordinates of the tracked marker and coordinates of the first plurality of markers, in terms of the world coordinate system.
35. The apparatus according to claim 17, wherein said detection means comprises means for selecting, where said detection means detects a second plurality of markers within an image captured by said camera, one marker to be tracked from among said second plurality of markers.
36. The apparatus according to claim 17, wherein the predetermined portion includes a view position of the operator,
said calculation means obtains the position/posture signal at a view point of the operator with correction of said operator's position and posture signal based on a distance difference between an image coordinate value of the tracked marker detected by said detection means and a converted coordinate value of the tracked marker which is converted from a known three dimensional coordinate value of the marker in the world coordinate system into the image coordinate system.
37. The apparatus according to claim 17, wherein the predetermined portion includes a view position of the operator,
said calculation means obtains the position/posture signal at a view point of the operator with correction of said operator's position and posture signal based on a distance difference between a coordinate value of the tracked marker which is converted from the camera coordinate system into the world coordinate system and a known three dimensional coordinate value of the marker in the world coordinate system.
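The world-coordinate correction of claim 37 (map the tracked marker from camera to world coordinates through the sensor-reported pose, then use its offset from the marker's known world position as the sensor error) can be sketched as follows. The translation-only mapping and all names are illustrative assumptions.

```python
def camera_to_world(p_cam, sensor_pose):
    """Map a camera-space point to world space through the
    sensor-reported pose (translation-only, for brevity)."""
    return tuple(c + s for c, s in zip(p_cam, sensor_pose))

def correct_pose(sensor_pose, marker_cam, marker_world_known):
    """The distance difference between the marker's converted world
    position and its known world position is the sensor error,
    subtracted out of the reported pose."""
    marker_world_est = camera_to_world(marker_cam, sensor_pose)
    error = tuple(e - k for e, k in zip(marker_world_est, marker_world_known))
    return tuple(s - e for s, e in zip(sensor_pose, error))
```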
38. The apparatus according to claim 17, wherein the sensor comprises a magnetic sensor mounted on the head of the operator.
39. The apparatus according to claim 17, wherein said camera includes a plurality of camera units attached to the operator's head; and
said detection means tracks the marker in the camera coordinate system.
40. The apparatus according to claim 30, wherein said camera includes two camera units.
41. A mixed reality presentation apparatus comprising:
a work table having a first plurality of markers arranged at known positions;
a position/posture sensor attached to an operator to detect a head position and posture of the operator and to output an operator's head position/posture signal;
a camera being set to capture at least one of the first plurality of markers within a field of view of the camera;
a detection means for processing an image signal from the camera, tracking a marker from among the first plurality of markers, and detecting a coordinate value of a tracked marker;
calculation means for calculating a position/posture signal representing a position and posture of the operator's view point, including correction of the operator's head position/posture signal outputted from the position/posture sensor based on the coordinate value of the tracked marker detected by said detection means; and
generation means for generating a virtual image for presenting a mixed reality at the view point in accordance with the calculated position/posture signal.
means for tracking a marker within an image obtained by the camera; and
means for outputting a coordinate value of the tracked marker in an image coordinate system.
45. The apparatus according to claim 44, wherein said detection means uses a marker firstly found within an image obtained by said camera.
46. The apparatus according to claim 44, wherein said detection means comprises means for searching an image of a present scene for a marker found in an image of a previous scene.
47. The apparatus according to claim 41, wherein a layout distribution density of the plurality of markers in the environment is set so that the density distribution of markers farther from the operator is lower than that of markers closer to the operator.
48. The apparatus according to claim 41, wherein the first plurality of markers are arranged within the environment so that at least one marker is captured within the field of view of the camera.
49. The apparatus according to claim 41, wherein said detection means calculates a coordinate of the tracked marker in an image coordinate system.
50. The apparatus according to claim 41, wherein said detection means calculates a coordinate of the tracked marker in a camera coordinate system.
51. The apparatus according to claim 41, wherein the first plurality of markers are depicted on a planar table arranged within the environment.
52. The apparatus according to claim 41, wherein said first plurality of markers are arranged in a three-dimensional manner.
53. The apparatus according to claim 41, wherein said detection means comprises:
identifying means for identifying a marker to be tracked from among said first plurality of markers.
54. The apparatus according to claim 53, wherein the identifying means identifies a marker in terms of an image coordinate system.
55. The apparatus according to claim 53, wherein the identifying means identifies a marker in terms of a world coordinate system.
56. A position/posture detection method for detecting an operation position of an operator so as to generate a three-dimensional virtual image associated with an operation to be done by the operator in a predetermined mixed reality environment, comprising:
the step of measuring and outputting an operator position/posture signal indicative of a three-dimensional position and posture of the operator;
the step of processing an image signal from a camera which captures a plurality of markers arranged in the environment, tracking at least one marker and detecting a coordinate of said at least one marker; and
the step of outputting a head position/posture signal indicative of a position and posture of the head of the operator, including correction of the measured operator position/posture signal based on the coordinate value of the at least one tracked marker detected by said processing step.
tracking at least one marker by processing image signals sensed by a plurality of camera units mounted on the head of the operator, using a triangulation method.
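The triangulation step above, for two head-mounted camera units a known baseline apart, can be sketched with simple rectified-stereo depth from disparity. The rectified pinhole setup, focal length, and names are illustrative assumptions, not the claimed method.

```python
def triangulate(pixel_left, pixel_right, baseline, focal=500.0):
    """Recover a marker's 3-D position (relative to the left camera)
    from its horizontal disparity between two rectified camera images,
    as in two-camera triangulation."""
    disparity = pixel_left[0] - pixel_right[0]
    z = focal * baseline / disparity      # depth from disparity
    x = pixel_left[0] * z / focal
    y = pixel_left[1] * z / focal
    return (x, y, z)
```

With a 0.1 m baseline and a 10-pixel disparity, for instance, the marker is placed 5 m in front of the cameras.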
60. A storage medium which stores a computer program that describes a method of claim 59.
61. A storage medium which stores a computer program that describes the method of claim 56.
62. A position/posture detection apparatus for detecting an operation position of an operator, comprising:
a position/posture sensor for measuring a three-dimensional position and posture of the operator to output an operator's position and posture signal;
a camera sensing images of a first plurality of markers arranged at known positions in the environment;
detection means for processing an image signal from said camera, tracking a marker of the first plurality of markers, and detecting a coordinate value of the tracked marker in a coordinate system; and
correction means for correcting an output signal from the sensor on the basis of coordinate value of the tracked marker.
Specification