APPARATUS FOR PRESENTING MIXED REALITY SHARED AMONG OPERATORS
2 Assignments
0 Petitions
Abstract
There is disclosed a mixed reality presentation apparatus which generates and displays a three-dimensional virtual image on a see-through display device so as to allow a plurality of players to play a multi-player game in a mixed reality environment. The apparatus has a CCD camera for detecting the mallet positions of the plurality of players, and a sensor for detecting the view point position of each player in the environment of the multi-player game. The apparatus generates a three-dimensional virtual image that represents a game result of the multi-player game that has progressed in accordance with changes in mallet position detected by the CCD camera, as viewed from the view point position of each player detected by the sensor, and outputs the generated image to the corresponding see-through display device. The apparatus determines the motion of each player by detecting infrared rays output from the corresponding mallet in an image captured by the CCD camera. The view point position detected by the sensor is corrected by identifying a marker in an image obtained by a camera attached to the head of each player, and comparing the marker position in that image with the actual marker position.
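The correction scheme in the abstract (predict where a known marker should appear from the sensor-estimated view point, then shift the estimate by the discrepancy) can be sketched as follows. The pinhole model with +Z as the camera axis, the focal length, and all function names are illustrative assumptions, not the disclosed implementation:

```python
import numpy as np

def project(point_world, viewpoint, focal=500.0):
    """Project a world point into the head camera image (pinhole, +Z forward,
    no head rotation -- a simplifying assumption for this sketch)."""
    rel = point_world - viewpoint
    return focal * rel[:2] / rel[2]

def correct_viewpoint(viewpoint_est, marker_world, marker_observed_px, focal=500.0):
    """Shift the sensor-estimated view point so that the known marker's
    predicted image position coincides with the position actually observed."""
    error_px = marker_observed_px - project(marker_world, viewpoint_est, focal)
    depth = marker_world[2] - viewpoint_est[2]
    # Back-project the pixel discrepancy to a lateral offset at marker depth.
    shift = np.array([error_px[0], error_px[1], 0.0]) * depth / focal
    return viewpoint_est - shift
```

With a marker at a surveyed position, a laterally drifted estimate is pulled back onto the true view point; a real system would track several markers and also correct posture, which this translation-only sketch omits.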
62 Claims
1. A mixed reality presentation apparatus which generates a three-dimensional virtual image associated with a collaborative operation to be done by a plurality of operators in a predetermined mixed reality environment, and displays the generated virtual image on see-through display devices respectively attached to the plurality of operators, comprising:
first sensor means for detecting a position of each of actuators which are operated by the plurality of operators and move as the collaborative operation progresses;
second sensor means for detecting a view point position of each of the plurality of operators in an environment of the collaborative operation; and
generation means for generating three-dimensional images for the see-through display devices of the individual operators, said generation means generating a three-dimensional virtual image representing an operation result of the collaborative operation that has progressed according to a change in position of each of the plurality of actuators detected by said first sensor means when viewed from the view point position of each operator detected by said second sensor means, and outputting the generated three-dimensional virtual image to each see-through display device. - View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 16)
9. A mixed reality presentation apparatus which generates a three-dimensional virtual image associated with a collaborative operation to be done by a plurality of operators in a predetermined mixed reality environment, and displays the generated virtual image on see-through display devices respectively attached to the plurality of operators, comprising:
a camera arranged so as to include a plurality of actuators operated by the plurality of operators in the collaborative operation within a field of view thereof;
actuator position detection means for outputting information relating to positions of the actuators associated with a coordinate system of that environment on the basis of an image sensed by said camera;
sensor means for detecting and outputting a view point position of each of the plurality of operators in the environment of the collaborative operation; and
image generation means for generating a three-dimensional virtual image of a progress result viewed from the view point position of each operator detected by said sensor means to each see-through display device so as to present the progress result of the collaborative operation that has progressed according to detected changes in position of the actuator to each operator.
10. A mixed reality presentation apparatus which generates a three-dimensional virtual image associated with a collaborative operation to be done by a plurality of operators in a predetermined mixed reality environment, and displays the generated virtual image on see-through display devices respectively attached to the plurality of operators, comprising:
a first camera which substantially includes the plurality of operators within a field of view thereof;
a first processor arranged so as to calculate operation positions of the plurality of operators on the basis of an image obtained by said first camera;
a detection device detecting a view point position of each operator using a plurality of sensors attached to the plurality of operators;
a plurality of second cameras sensing front fields of the individual operators, at least one second camera being attached to each of the plurality of operators;
a second processor calculating information associated with a line of sight of each operator on the basis of each of images from said plurality of second cameras;
a third processor correcting the view point position of each operator detected by the sensor using the line of sight information from said second processor and outputting the corrected view point position as a position on a coordinate system of the mixed reality environment;
a first image processing device making the collaborative operation virtually progress on the basis of the operation position of each operator calculated by said first processor, and generating three-dimensional virtual images representing results that have changed along with the progress of the collaborative operation for the plurality of operators; and
a second image processing device transferring coordinate positions of the three-dimensional virtual images for the individual operators generated by said first image processing device in accordance with the individual corrected view point positions calculated by said third processor, and outputting the coordinate-transferred images to the see-through display devices.
11. A method of generating a three-dimensional virtual image associated with a collaborative operation to be done within a predetermined mixed reality environment so as to display the image on see-through display devices attached to a plurality of operators in the mixed reality environment, comprising:
the image sensing step of sensing a plurality of actuators operated by the plurality of operators by a camera that includes the plurality of operators within a field of view thereof;
the actuator position acquisition step of calculating information relating to positions of the actuators associated with a coordinate system of the environment on the basis of the image sensed by the camera;
the view point position detection step of detecting a view point position of each of the plurality of operators in the environment of the collaborative operation on the coordinate system of the environment;
the progress step of making the collaborative operation virtually progress in accordance with changes in position of the plurality of actuators calculated in the actuator position acquisition step; and
the image generation step of outputting a three-dimensional virtual image of a progress result in the progress step viewed from the view point position of each operator detected in the view point position detection step to each see-through display device so as to present the progress result in the progress step to each operator. - View Dependent Claims (13, 14, 15, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40)
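The method steps of claim 11 (image sensing, actuator position acquisition, progress, image generation) can be illustrated with a toy pipeline. The brightest-pixel "infrared" detector, the one-mallet-per-half layout, and the 2-D puck physics are illustrative assumptions standing in for the disclosed multi-player game:

```python
import numpy as np

def acquire_actuator_positions(frame):
    """Actuator position acquisition step: find one bright (infrared) blob
    per half of the overhead camera frame, one mallet per operator."""
    h, w = frame.shape
    positions = []
    for i, half in enumerate((frame[:, :w // 2], frame[:, w // 2:])):
        y, x = np.unravel_index(np.argmax(half), half.shape)
        positions.append((int(x) + i * (w // 2), int(y)))
    return positions

def progress_step(puck, velocity, mallets, radius=5.0):
    """Progress step: advance the shared virtual state; a mallet within
    `radius` of the puck reflects it."""
    puck = puck + velocity
    for m in mallets:
        if np.hypot(*(puck - np.asarray(m, dtype=float))) < radius:
            velocity = -velocity
    return puck, velocity

def image_generation_step(puck, viewpoint):
    """Image generation step: express the puck in one operator's
    view-point-centred coordinates (a stand-in for full 3-D rendering)."""
    return puck - np.asarray(viewpoint)[:2]
```

Each loop iteration would run these in order and send each operator's result to that operator's see-through display.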
12. A mixed reality presentation method for generating a three-dimensional virtual image associated with a collaborative operation to be done by a plurality of operators in a predetermined mixed reality environment, and displaying the generated virtual image on see-through display devices respectively attached to the plurality of operators, comprising:
the first image sensing step of capturing an image using a first camera which substantially includes the plurality of operators within a field of view thereof;
the first detection step of detecting operation positions of the plurality of operators on the basis of the image sensed by the first camera;
the second detection step of detecting a view point position of each operator using a plurality of sensors respectively attached to the plurality of operators;
the second image sensing step of sensing a front field of each operator using each of a plurality of second cameras, at least one second camera being attached to each of the plurality of operators;
the line of sight calculation step of calculating information associated with a line of sight of each operator on the basis of each of images obtained from the plurality of second cameras;
the correction step of correcting the view point position of each operator detected by the sensor on the basis of the line of sight information calculated in the line of sight calculation step, and obtaining the corrected view point position as a position on a coordinate system of the mixed reality environment;
the generation step of making the collaborative operation virtually progress on the basis of the operation positions of the individual operators detected in the first detection step, and generating three-dimensional virtual images that represent results of the collaborative operation and are viewed from the view point positions of the plurality of operators; and
the step of transferring coordinate positions of the three-dimensional virtual images for the individual operators generated in the generation step in accordance with the individual corrected view point positions obtained in the correction step, and outputting the coordinate-transferred images to the see-through display devices.
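The final step of claim 12, transferring the coordinate positions of the virtual images according to each corrected view point, amounts to re-expressing shared world-space vertices in each operator's viewer frame. A minimal sketch follows; the 4x4 matrix layout and the yaw-only head rotation about the vertical (Y) axis are assumptions for illustration:

```python
import numpy as np

def view_matrix(viewpoint, yaw=0.0):
    """World-to-viewer transform built from a corrected view point position
    and a head yaw angle (rotation about the vertical Y axis)."""
    c, s = np.cos(yaw), np.sin(yaw)
    rot = np.array([[c, 0.0, -s],
                    [0.0, 1.0, 0.0],
                    [s, 0.0, c]])
    mat = np.eye(4)
    mat[:3, :3] = rot
    mat[:3, 3] = -rot @ np.asarray(viewpoint, dtype=float)
    return mat

def transfer_coordinates(vertices, viewpoint, yaw=0.0):
    """Apply the viewer transform to the virtual image's vertex positions."""
    verts = np.asarray(vertices, dtype=float)
    homog = np.hstack([verts, np.ones((len(verts), 1))])
    return (homog @ view_matrix(viewpoint, yaw).T)[:, :3]
```

Running the same vertex list through `transfer_coordinates` once per operator, with that operator's corrected view point, yields the per-display images the claim describes.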
17. A position/posture detection apparatus for detecting an operation position of an operator so as to generate a three-dimensional virtual image that represents an operation done by the operator in a predetermined mixed reality environment, comprising:
a position/posture sensor for measuring a three-dimensional position and posture of the operator to output an operator's position and posture signal;
a camera sensing images of a first plurality of markers arranged at known positions in the environment;
detection means for processing an image signal from said camera, tracking a marker of the first plurality of markers, and detecting a coordinate value of the tracked marker in a coordinate system; and
calculation means for calculating a portion-position and -posture representing a position and posture of the operating portion, on the basis of the coordinate value of the tracked marker detected by said detection means and the operator's position and posture signal outputted from the position/posture sensor.
41. A mixed reality presentation apparatus comprising:
a work table having a first plurality of markers arranged at known positions;
a position/posture sensor attached to an operator to detect a head posture of the operator;
a camera being set to capture at least one of the first plurality of markers within a field of view of the camera;
a detection means for processing an image signal from the camera, tracking a marker from among the first plurality of markers, and detecting a coordinate value of a tracked marker;
calculation means for calculating a position/posture signal representing a position and posture of the operator's view point, on the basis of the coordinate value of the tracked marker detected by said detection means and an operator's head position/posture signal outputted from the position/posture sensor; and
generation means for generating a virtual image for presenting a mixed reality at the view point in accordance with the calculated position/posture signal. - View Dependent Claims (42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 57, 58, 59, 60, 61)
56. A position/posture detection method for detecting an operation position of an operator so as to generate a three-dimensional virtual image associated with an operation to be done by the operator in a predetermined mixed reality environment, comprising:
the step of measuring to output an operator position/posture signal indicative of a three-dimensional position and posture of the operator;
the step of processing an image signal from a camera which captures a plurality of markers arranged in the environment, tracking at least one marker and detecting a coordinate of said at least one marker; and
outputting a head position/posture signal indicative of a position and posture of the head of the operator, on the basis of the coordinate of the tracked marker and the measured operator position/posture signal.
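The combination in claim 56 of the measured sensor signal with the tracked marker's coordinate can be sketched as a drift correction: the marker's position relative to the operator, as measured through the head camera, plus the sensor estimate should equal the marker's known world position, and the residual is treated as sensor drift. The additive, translation-only model and all names are illustrative assumptions:

```python
import numpy as np

def correct_sensor_position(sensor_pos, marker_world, marker_rel):
    """Subtract the sensor's drift, estimated from one tracked marker at a
    surveyed world position (translation-only sketch; posture is omitted)."""
    sensor_pos = np.asarray(sensor_pos, dtype=float)
    marker_est = sensor_pos + np.asarray(marker_rel, dtype=float)
    drift = marker_est - np.asarray(marker_world, dtype=float)
    return sensor_pos - drift
```

After correction, the corrected position plus the camera-measured marker offset lands exactly on the marker's surveyed position, which is the consistency condition the claim's final step relies on.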
62. A position/posture detection apparatus for detecting an operation position of an operator, comprising:
a position/posture sensor for measuring a three-dimensional position and posture of the operator to output an operator's position and posture signal;
a camera sensing images of a first plurality of markers arranged at known positions in the environment;
detection means for processing an image signal from said camera, tracking a marker of the first plurality of markers, and detecting a coordinate value of the tracked marker in a coordinate system; and
correction means for correcting an output signal from the sensor on the basis of the coordinate value of the tracked marker.
Specification