3-dimensional-model-processing apparatus, 3-dimensional-model-processing method and program-providing medium
Abstract
Disclosed is a 3-dimensional model-processing apparatus capable of processing a 3-dimensional model appearing on a display unit. The apparatus includes a sensor, arbitrarily controllable by the user, for generating information on position and posture, and control means for carrying out a grasp-state-setting process that takes the relation between the sensor-generated information on the position and the posture of the sensor and information on the position and the posture of the 3-dimensional model appearing on the display unit as a constraint relation. The constraint relation is set on the basis of either a relation between the 3-dimensional position of the 3-dimensional model appearing on the display unit and the 3-dimensional position of a tool appearing on the display unit for the sensor, or a relation between the 3-dimensional posture of the 3-dimensional model and the 3-dimensional posture of the tool.
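In concrete terms, the grasp-state-setting process amounts to freezing the model's pose in the tool's coordinate frame while the grasp is active. Below is a minimal sketch, assuming poses are given as a position vector plus a 3x3 rotation matrix; all function names are illustrative and not taken from the patent.

```python
def mat_vec(m, v):
    # Multiply a 3x3 matrix by a 3-vector.
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def mat_mul(a, b):
    # Multiply two 3x3 matrices.
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def transpose(m):
    # The inverse of a rotation matrix is its transpose.
    return [[m[j][i] for j in range(3)] for i in range(3)]

def make_grasp_constraint(model_pos, model_rot, tool_pos, tool_rot):
    """At grasp time, record the model's pose in the tool's frame.
    This fixed offset is the constraint relation."""
    inv_rot = transpose(tool_rot)
    rel_pos = mat_vec(inv_rot, [model_pos[i] - tool_pos[i] for i in range(3)])
    rel_rot = mat_mul(inv_rot, model_rot)
    return rel_pos, rel_rot

def apply_grasp_constraint(tool_pos, tool_rot, constraint):
    """Each frame, re-derive the model pose from the sensor-driven tool
    pose, so the grasped model follows the tool rigidly."""
    rel_pos, rel_rot = constraint
    offset = mat_vec(tool_rot, rel_pos)
    pos = [tool_pos[i] + offset[i] for i in range(3)]
    rot = mat_mul(tool_rot, rel_rot)
    return pos, rot
```

Once the constraint is recorded, any movement of the sensor-driven tool moves the grasped model with it, which is the behaviour the claims describe.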
16 Claims
1. A 3-dimensional model-processing apparatus capable of processing a 3-dimensional model appearing on a display unit comprising:
a user controllable 3-dimensional sensor capable of generating information on a position and a posture of an appearance on the display unit; and
a controller having a grasp-state-setting process of taking a relation between said sensor-generated information on a position and a posture of said 3-dimensional sensor and information on a position and a posture of said 3-dimensional model appearing on said display unit as a constraint relation on the basis of one of (a) a relation between a 3-dimensional position of said 3-dimensional model appearing on said display unit and the 3-dimensional position of a 3-dimensional tool appearing for said 3-dimensional sensor on said display unit and (b) a relation between a 3-dimensional posture of said 3-dimensional model appearing on said display unit and a 3-dimensional posture of said 3-dimensional tool appearing for said 3-dimensional sensor on said display unit,
wherein said controller has a configuration for selecting a 3-dimensional model crossed by a straight optical beam generated in a specific direction by said tool appearing on said display unit and for carrying out said grasp-state-setting process on said selected 3-dimensional model.
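The beam-based selection recited in claim 1 is, in effect, ray picking: cast a ray from the tool in a fixed direction and take the nearest model it crosses. A sketch follows, using bounding spheres as hit areas; the sphere representation and all names are assumptions for illustration, not taken from the patent.

```python
def ray_hits_sphere(origin, direction, center, radius):
    """Return the distance along the beam to the sphere surface,
    or None if the beam misses. `direction` must be a unit vector."""
    oc = [center[i] - origin[i] for i in range(3)]
    proj = sum(oc[i] * direction[i] for i in range(3))  # closest approach along the ray
    if proj < 0:
        return None  # sphere lies behind the tool
    dist2 = sum(oc[i] * oc[i] for i in range(3)) - proj * proj
    if dist2 > radius * radius:
        return None
    return proj - (radius * radius - dist2) ** 0.5

def pick_model(origin, direction, models):
    """Select the nearest model whose bounding sphere the beam crosses.
    `models` maps a model id to a (center, radius) bounding sphere."""
    best = None
    for name, (center, radius) in models.items():
        t = ray_hits_sphere(origin, direction, center, radius)
        if t is not None and (best is None or t < best[1]):
            best = (name, t)
    return best[0] if best else None
```

Taking the nearest crossed model resolves the case where the beam passes through several models at once.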
2. A 3-dimensional model-processing apparatus capable of processing a 3-dimensional model appearing on a display unit comprising:
a user controllable 3-dimensional sensor capable of generating information on a position and a posture of an appearance on the display unit; and
a controller having a grasp-state-setting process of taking a relation between said sensor-generated information on a position and a posture of said 3-dimensional sensor and information on a position and a posture of said 3-dimensional model appearing on said display unit as a constraint relation on the basis of one of (a) a relation between a 3-dimensional position of said 3-dimensional model appearing on said display unit and the 3-dimensional position of a 3-dimensional tool appearing for said 3-dimensional sensor on said display unit and (b) a relation between a 3-dimensional posture of said 3-dimensional model appearing on said display unit and a 3-dimensional posture of said 3-dimensional tool appearing for said 3-dimensional sensor on said display unit,
wherein said controller has a configuration for selecting a 3-dimensional model hit by a flying object emanating in a specific direction from said tool appearing on said display unit and for carrying out said grasp-state-setting process on said selected 3-dimensional model.
Dependent claims: 3, 4, 5, 6.
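Claim 2 replaces the instantaneous beam with a flying object that travels from the tool frame by frame until it enters a model's hit area. A sketch of that selection loop, where the step size, the bounding-sphere hit areas, and all names are illustrative assumptions:

```python
def fly_and_pick(start, direction, models, speed=0.5, max_steps=100):
    """Advance a small flying object from the tool each frame; the
    first model whose bounding sphere it enters becomes the grasp
    target. `models` maps a model id to a (center, radius) sphere."""
    pos = list(start)
    for _ in range(max_steps):
        # Move the object one step along its emanation direction.
        pos = [pos[i] + speed * direction[i] for i in range(3)]
        for name, (center, radius) in models.items():
            d2 = sum((pos[i] - center[i]) ** 2 for i in range(3))
            if d2 <= radius * radius:
                return name, pos  # hit: the object stops at this point
    return None, pos  # flew past everything; no model selected
```

Unlike the beam, the flying object takes visible time to reach its target, which is what makes the stuck-and-return display of claim 10 meaningful.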
7. A 3-dimensional model-processing apparatus capable of processing a 3-dimensional model appearing on a display unit comprising:
a user controllable 3-dimensional sensor capable of generating information on a position and a posture of an appearance on the display unit; and
a controller having a grasp-state-setting process of taking a relation between said sensor-generated information on a position and a posture of said 3-dimensional sensor and information on a position and a posture of said 3-dimensional model appearing on said display unit as a constraint relation on the basis of one of (a) a relation between a 3-dimensional position of said 3-dimensional model appearing on said display unit and the 3-dimensional position of a 3-dimensional tool appearing for said 3-dimensional sensor on said display unit and (b) a relation between a 3-dimensional posture of said 3-dimensional model appearing on said display unit and a 3-dimensional posture of said 3-dimensional tool appearing for said 3-dimensional sensor on said display unit,
wherein said controller has a configuration for identifying the area of a 3-dimensional model appearing on said display unit by recognizing a bounding-box area comprising said 3-dimensional model, an internal area of said 3-dimensional model, or a bounding sphere displayed as a smallest sphere including said 3-dimensional model.
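The hit-area variants in claim 7 can be computed cheaply from a model's vertex list. The sketch below builds an axis-aligned bounding box and an enclosing sphere; note the sphere here is a simple approximation centred on the box, not the true smallest enclosing sphere the claim recites, and all names are illustrative.

```python
def bounding_box(vertices):
    """Axis-aligned bounding box as a (min corner, max corner) pair."""
    lo = [min(v[i] for v in vertices) for i in range(3)]
    hi = [max(v[i] for v in vertices) for i in range(3)]
    return lo, hi

def bounding_sphere(vertices):
    """An enclosing sphere: centred on the box centre, with radius
    reaching the farthest vertex. A cheap stand-in for the smallest
    sphere, adequate for hit-area tests."""
    lo, hi = bounding_box(vertices)
    center = [(lo[i] + hi[i]) / 2 for i in range(3)]
    radius = max(sum((v[i] - center[i]) ** 2 for i in range(3)) ** 0.5
                 for v in vertices)
    return center, radius

def point_in_box(p, box):
    """True if point p lies inside the bounding-box area."""
    lo, hi = box
    return all(lo[i] <= p[i] <= hi[i] for i in range(3))
```

The internal-area variant would instead test against the model's own geometry, which is more precise but far more expensive per frame.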
8. A 3-dimensional model-processing method capable of processing a 3-dimensional model appearing on a display unit comprising:

allowing a user to arbitrarily control information on a position and a posture of an appearance on the display unit, which is generated by a 3-dimensional sensor; and
carrying out a grasp-state-setting process of taking a relation between said sensor-generated information on a position and a posture of said 3-dimensional sensor and information on a position and a posture of said 3-dimensional model appearing on said display unit as a constraint relation either on the basis of a relation between a 3-dimensional position of said 3-dimensional model appearing on said display unit and the 3-dimensional position of a 3-dimensional tool appearing for said 3-dimensional sensor on said display unit or on the basis of a relation between a 3-dimensional posture of said 3-dimensional model appearing on said display unit and a 3-dimensional posture of said 3-dimensional tool appearing for said 3-dimensional sensor on said display unit,
wherein carrying out said grasp-state-setting process comprises:
selecting a 3-dimensional model hit by a flying object emanating in a specific direction from said tool appearing on said display unit; and
carrying out said grasp-state-setting process on said selected 3-dimensional model.
9. A 3-dimensional model-processing method capable of processing a 3-dimensional model appearing on a display unit comprising:
allowing a user to arbitrarily control information on a position and a posture of an appearance on the display unit, which is generated by a 3-dimensional sensor; and
carrying out a grasp-state-setting process of taking a relation between said sensor-generated information on a position and a posture of said 3-dimensional sensor and information on a position and a posture of said 3-dimensional model appearing on said display unit as a constraint relation either on the basis of a relation between a 3-dimensional position of said 3-dimensional model appearing on said display unit and the 3-dimensional position of a 3-dimensional tool appearing for said 3-dimensional sensor on said display unit or on the basis of a relation between a 3-dimensional posture of said 3-dimensional model appearing on said display unit and a 3-dimensional posture of said 3-dimensional tool appearing for said 3-dimensional sensor on said display unit,
wherein carrying out said grasp-state-setting process comprises:
selecting a 3-dimensional model crossed by a straight optical beam generated in a specific direction by said tool appearing on said display unit; and
carrying out said grasp-state-setting process on said selected 3-dimensional model.
10. A 3-dimensional model-processing method capable of processing a 3-dimensional model appearing on a display unit comprising:
executing control to display said flying object stuck on a surface of a 3-dimensional model appearing on said display unit when said flying object hits said 3-dimensional model; and
carrying out processing to return said flying object from a stuck position on said surface of said 3-dimensional model to a position of said tool when a cancel input provided in advance is received by said 3-dimensional sensor.
Dependent claims: 11, 12, 13, 14.
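The stick-and-return behaviour of claim 10 is naturally expressed as a small state machine on the flying object: it rests at the tool, sticks where it hits a model's surface, and returns to the tool on a cancel input from the sensor. A sketch, with all names illustrative rather than taken from the patent:

```python
class FlyingObject:
    """Minimal state machine for the stick-and-return display."""

    def __init__(self, tool_pos):
        self.tool_pos = list(tool_pos)  # where the tool currently is
        self.pos = list(tool_pos)       # where the object is drawn
        self.state = "at_tool"

    def hit(self, surface_point):
        # The object hit a model: display it stuck on the surface
        # at the point of impact.
        self.pos = list(surface_point)
        self.state = "stuck"

    def cancel(self):
        # Cancel input from the 3-dimensional sensor: return the
        # object from the stuck position to the tool.
        if self.state == "stuck":
            self.pos = list(self.tool_pos)
            self.state = "at_tool"
```

Keeping the stuck position on screen gives the user visible confirmation of which model was selected before the grasp state takes effect.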
15. A 3-dimensional model-processing method capable of processing a 3-dimensional model appearing on a display unit comprising:
allowing a user to arbitrarily control information on a position and a posture of an appearance on the display unit, which is generated by a 3-dimensional sensor; and
carrying out a grasp-state-setting process of taking a relation between said sensor-generated information on a position and a posture of said 3-dimensional sensor and information on a position and a posture of said 3-dimensional model appearing on said display unit as a constraint relation either on the basis of a relation between a 3-dimensional position of said 3-dimensional model appearing on said display unit and the 3-dimensional position of a 3-dimensional tool appearing for said 3-dimensional sensor on said display unit or on the basis of a relation between a 3-dimensional posture of said 3-dimensional model appearing on said display unit and a 3-dimensional posture of said 3-dimensional tool appearing for said 3-dimensional sensor on said display unit,
wherein, in carrying out said grasp-state-setting process, the area of a 3-dimensional model appearing on said display unit is identified by recognizing one of a bounding-box area comprising said 3-dimensional model, an internal area of said 3-dimensional model, and a bounding sphere displayed as a smallest sphere including said 3-dimensional model.
16. A 3-dimensional model-processing method capable of processing a 3-dimensional model appearing on a display unit comprising:
allowing a user to arbitrarily control information on a position and a posture of an appearance on the display unit, which is generated by a 3-dimensional sensor; and
carrying out a grasp-state-setting process of taking a relation between said sensor-generated information on a position and a posture of said 3-dimensional sensor and information on a position and a posture of said 3-dimensional model appearing on said display unit as a constraint relation either on the basis of a relation between a 3-dimensional position of said 3-dimensional model appearing on said display unit and the 3-dimensional position of a 3-dimensional tool appearing for said 3-dimensional sensor on said display unit or on the basis of a relation between a 3-dimensional posture of said 3-dimensional model appearing on said display unit and a 3-dimensional posture of said 3-dimensional tool appearing for said 3-dimensional sensor on said display unit,
wherein, in carrying out said grasp-state-setting process, control is executed to display a specific 3-dimensional model subjected to said grasp-state-setting process by distinguishing said specific 3-dimensional model from other 3-dimensional models appearing on said display unit.
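The distinguishing display of claim 16 reduces, in the simplest case, to rendering the grasped model with a different appearance from the others, for example a highlight colour. A sketch, where the colour values and names are illustrative assumptions:

```python
def render_colors(model_ids, grasped,
                  normal=(0.7, 0.7, 0.7), highlight=(1.0, 0.4, 0.1)):
    """Assign each model a display colour, distinguishing the model
    currently subjected to the grasp-state-setting process from the
    other models on the display unit."""
    return {name: (highlight if name == grasped else normal)
            for name in model_ids}
```

A real renderer might instead draw an outline or change transparency; any per-model display attribute serves the same distinguishing purpose.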