3-D selection and manipulation with a multiple dimension haptic interface
Abstract
Systems and methods provide a user the ability to select three-dimensional virtual objects in a three-dimensional modeling environment using two-dimensional representations of the objects. In broad overview, the invention involves a haptic interface with multiple degrees of freedom that controls a three-dimensional cursor. A user employs the cursor to select an arbitrary point on a three-dimensional virtual object of interest. Through the application of a mathematical transformation, the system displays the cursor at the location of the selected point on the object. The user can manipulate the object by operating the haptic interface, and the systems and methods allow the user to edit the selected virtual object; in one embodiment, editing includes sculpting the object. When the user releases the object after manipulation is complete, the cursor is relocated to the position it would have had if the manipulations had been applied to the cursor directly.
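The select–transform–release cycle the abstract describes can be sketched in a few lines. This is a minimal illustration under assumptions, not the patented implementation; every name, and the choice of a pure-translation transform, is illustrative.

```python
import numpy as np

def grab(cursor_pos, pick_point):
    # On selection, define the transformation that maps the cursor onto
    # the picked local origin point (here a pure translation offset).
    return pick_point - cursor_pos

def manipulate(object_origin, device_delta):
    # While the object is grabbed, device motion moves the object.
    return object_origin + device_delta

def release(start_cursor_pos, accumulated_delta):
    # On release, the cursor jumps to where it would be had the device
    # motion been applied to the cursor directly.
    return start_cursor_pos + accumulated_delta

start_cursor = np.array([0.0, 0.0, 0.0])
pick_point = np.array([1.0, 2.0, 0.5])
offset = grab(start_cursor, pick_point)        # transform defined at grab time
delta = np.array([0.2, 0.0, 0.0])              # user moves the device
moved_origin = manipulate(pick_point, delta)   # object follows the device
final_cursor = release(start_cursor, delta)    # cursor catches up on release
```

Note that during manipulation the cursor is displayed at the picked point (cursor position plus `offset`), which is why a final relocation step is needed when the object is released.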
79 Claims
-
1. A method for selecting an object in a three-dimensional modeling environment, the method comprising the steps of:
-
generating a three-dimensional modeling environment containing one or more virtual objects and a three-dimensional cursor;
determining a first three-dimensional cursor position in said three-dimensional modeling environment, said three-dimensional cursor position corresponding to a position of an input device having at least three degrees of freedom;
representing a first view of at least one of said one or more virtual objects in a first two-dimensional display space;
representing said three-dimensional cursor position in said two-dimensional display space; and
selecting one of said virtual objects based on a positional correspondence of said object and said cursor in said two-dimensional display space. - View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42)
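Claim 1's core idea — selecting by positional correspondence in the two-dimensional display space rather than by full three-dimensional coincidence — might be sketched as follows. The orthographic projection and all names are illustrative assumptions; a real system would project through the camera transform.

```python
import numpy as np

def project_to_screen(p, view=(0, 1)):
    # Orthographic stand-in for the display projection: keep the two
    # in-plane coordinates and discard depth.
    return p[list(view)]

def pick(objects, cursor, tol=0.05):
    # Select the object whose 2-D projection coincides with the
    # cursor's 2-D projection, regardless of their depth difference.
    c2 = project_to_screen(cursor)
    for name, pos in objects.items():
        if np.linalg.norm(project_to_screen(pos) - c2) <= tol:
            return name
    return None

objects = {"sphere": np.array([1.0, 1.0, -3.0]),
           "cube":   np.array([2.0, 0.0,  5.0])}
cursor = np.array([1.0, 1.0, 0.0])   # same screen position as the sphere,
selected = pick(objects, cursor)     # but at a different depth
```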
selecting a local origin point on said selected virtual object; and
defining a mathematical transformation in said three-dimensional modeling environment, said mathematical transformation representative of a difference in location of said local origin point and said three-dimensional cursor position.
-
12. The method of claim 11, wherein said local origin point is an arbitrary point on said object.
-
13. The method of claim 11, wherein defining said mathematical transformation comprises defining a vector having a component directed orthogonal to said two-dimensional display space.
-
14. The method of claim 11, wherein defining said mathematical transformation comprises defining a mathematical transformation having at least one of a three-dimensional translational vector, a rotation about said local origin point, and a rotation about said three-dimensional cursor position.
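The "rotation about said local origin point" component of claim 14 amounts to conjugating a rotation by a translation: shift the origin point to zero, rotate, shift back. A sketch with an assumed z-axis rotation (axis choice and names are illustrative):

```python
import numpy as np

def rotate_about_point(points, origin, angle):
    # Rotation about a local origin point: translate so the origin is
    # at zero, rotate about the z-axis, then translate back.
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c,  -s,  0.0],
                  [s,   c,  0.0],
                  [0.0, 0.0, 1.0]])
    return (points - origin) @ R.T + origin

pts = np.array([[2.0, 1.0, 0.0]])
origin = np.array([1.0, 1.0, 0.0])
rotated = rotate_about_point(pts, origin, np.pi / 2)  # quarter turn about the origin point
```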
-
15. The method of claim 11, further comprising the steps of:
-
applying said transformation; and
manipulating said virtual object, said manipulation of said virtual object corresponding to a manipulation of said input device by the user.
-
16. The method of claim 11, further comprising the step of manipulating said virtual object, said manipulation of said virtual object corresponding to a manipulation of said input device by the user combined with an application of said transformation.
-
17. The method of claim 15, wherein said manipulation of said input device comprises at least one of a translational degree of freedom and a rotational degree of freedom.
-
18. The method of claim 17, wherein said manipulation of said input device comprises a simultaneous manipulation of two or more independent degrees of freedom.
-
19. The method of claim 17, wherein said manipulation of said input device comprises a simultaneous manipulation of three or more independent degrees of freedom.
-
20. The method of claim 17, wherein said manipulation of said input device comprises a simultaneous manipulation of six or more independent degrees of freedom.
-
21. The method of claim 15, further comprising the step of relocating said three-dimensional cursor to the location of the local origin point by application of the mathematical transformation.
-
22. The method of claim 21, wherein the relocating step is performed only during a duration of the manipulation.
-
23. The method of claim 22, further comprising the step of providing a visual aid to help a user select and manipulate said virtual object.
-
24. The method of claim 23, wherein providing said visual aid comprises providing a user-activated constraint limiting a point to a locus aligned to an axis of said three-dimensional modeling environment.
-
25. The method of claim 15, further comprising the step of moving said three-dimensional cursor to a position the cursor would have if manipulation of said input device by a user had been applied directly to said three-dimensional cursor.
-
26. The method of claim 25, wherein the moving step is performed upon a command issued by the user.
-
27. The method of claim 26, wherein said command is a release of said selected virtual object.
-
28. The method of claim 15, further comprising the step of providing a visual aid to help a user select and manipulate said virtual object.
-
29. The method of claim 28, wherein providing said visual aid comprises providing a user-activated constraint limiting a point to a locus aligned to an axis of said three-dimensional modeling environment.
-
30. The method of claim 28, wherein providing said visual aid comprises providing a context-specific visual aid consistent with user-defined geometrical limitations.
-
31. The method of claim 28, wherein providing said visual aid comprises representing a second view of at least one of said one or more virtual objects in a second two-dimensional display space, said first two-dimensional display space and said second two-dimensional display space corresponding to different planes of said three-dimensional modeling environment.
-
32. The method of claim 31, wherein representing said second view comprises representing said second view on said second two-dimensional display space whose plane is orthogonal to a plane of said first two-dimensional display space.
-
33. The method of any of claims 1, 5, 8, 11, 15, 16, 21, 23, 25, or 28, wherein said input device comprises a haptic device.
-
34. The method of claim 33, wherein said haptic device comprises a haptic device providing force feedback through actuators operating in at least three degrees of freedom.
-
35. The method of claim 34, further comprising the step of providing a haptic aid to help a user select and manipulate said virtual object.
-
36. The method of claim 35, wherein said haptic aid comprises provision of dynamic friction force during said positional correspondence of said object and said cursor in said two-dimensional display space.
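The dynamic-friction haptic aid of claim 36 can be modeled as a kinetic-friction force opposing cursor velocity while the cursor and object coincide in the display plane. The coefficient, load, and names below are assumptions:

```python
import numpy as np

def friction_force(cursor_vel, overlapping, mu=0.3, normal_load=1.0):
    # Kinetic friction: a force of constant magnitude mu * normal_load
    # directed against the cursor's motion, active only while the
    # cursor and object overlap in the display plane.
    speed = np.linalg.norm(cursor_vel)
    if not overlapping or speed == 0.0:
        return np.zeros(3)
    return -mu * normal_load * cursor_vel / speed

f_drag = friction_force(np.array([0.0, 2.0, 0.0]), overlapping=True)
f_free = friction_force(np.array([0.0, 2.0, 0.0]), overlapping=False)
```

The drag gives the user a tactile cue that the cursor is over a selectable object, without preventing motion.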
-
37. The method of claim 35, wherein, during an activation of said user-activated visual constraint limiting a point to a locus aligned to an axis of said three-dimensional modeling environment, said haptic aid comprises a haptic constraint generally limiting motion of the three-dimensional cursor to directions aligned to an axis of said three-dimensional environment within a region of radius R about an identified point.
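The axis-aligned haptic constraint of claim 37 — active within a region of radius R about an identified point — might be rendered as a spring force toward the nearest point of the axis-aligned line. The gain, radius, and names are illustrative assumptions:

```python
import numpy as np

def axis_constraint_force(cursor, anchor, axis, radius, k=200.0):
    # Spring force pulling the cursor onto the axis-aligned line through
    # 'anchor', applied only while the cursor is within 'radius' of the
    # identified point.
    axis = axis / np.linalg.norm(axis)
    rel = cursor - anchor
    if np.linalg.norm(rel) > radius:
        return np.zeros(3)                       # outside the snap region
    on_line = anchor + np.dot(rel, axis) * axis  # closest point on the line
    return k * (on_line - cursor)

force = axis_constraint_force(np.array([0.1, 0.02, 0.0]),
                              np.zeros(3),
                              np.array([1.0, 0.0, 0.0]),
                              radius=0.5)
```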
-
38. The method of claim 37, further comprising the step of contemporaneously displaying a visual aid component that indicates an axial location of said cursor along an axis.
-
39. The method of claim 35, wherein, during an activation of said user-activated visual constraint limiting a point to a selected line, said haptic aid comprises a haptic constraint limiting motion of the three-dimensional cursor to said line.
-
40. The method of claim 39, further comprising the step of contemporaneously displaying a visual aid component that indicates an axial location of said cursor along an axis.
-
41. The method of claim 35, wherein, during an activation of said user-activated visual constraint limiting a point to a selected plane, said haptic aid comprises a haptic constraint limiting motion of the three-dimensional cursor to said plane.
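Similarly, claim 41's plane constraint can be rendered as a restoring force proportional to the cursor's signed distance from the selected plane (gain and names assumed):

```python
import numpy as np

def plane_constraint_force(cursor, plane_point, normal, k=200.0):
    # Restoring force along the plane normal, proportional to the
    # cursor's signed distance from the selected plane.
    n = normal / np.linalg.norm(normal)
    dist = np.dot(cursor - plane_point, n)
    return -k * dist * n

force = plane_constraint_force(np.array([0.0, 0.0, 0.03]),
                               np.zeros(3),
                               np.array([0.0, 0.0, 1.0]))
```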
-
42. The method of claim 41, further comprising the step of contemporaneously displaying a visual aid component that indicates a location of said plane.
-
43. An apparatus that permits a user to select an object in a three-dimensional modeling environment, comprising:
-
a computer that supports a three-dimensional modeling environment application;
an input device that provides user input to said computer, said input device having at least three degrees of freedom;
a modeling module that, when operating, generates said three-dimensional modeling environment using said computer, said three-dimensional modeling environment adapted to model one or more virtual objects and to employ a three-dimensional cursor; and
a selection module responsive to user commands that, when operating, selects one of said virtual objects based on a two-dimensional positional correspondence of said object and said cursor. - View Dependent Claims (44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79)
a cursor tracking module that, when operating, determines a position of said three-dimensional cursor in said three dimensional modeling environment, said position of said cursor corresponding to a position of said input device;
an object tracking module that, when operating, tracks a local origin point on said selected virtual object; and
a transformation module that, when operating, defines a mathematical transformation in said three-dimensional modeling environment, said mathematical transformation representative of a difference in location of said local origin point and said three-dimensional cursor position at a time the user selects said virtual object.
-
52. The apparatus of claim 51, wherein said transformation module defines said mathematical transformation in terms of at least one of a three-dimensional translational vector, a rotation about said local origin point, and a rotation about said three-dimensional cursor position.
-
53. The apparatus of claim 51, further comprising:
an object manipulation module that, when operating, manipulates said virtual object, said manipulation of said virtual object corresponding to a manipulation of said input device by the user combined with an application of said transformation.
-
54. The apparatus of claim 53, wherein said object manipulation module represents said manipulation of said input device using at least one of a translational degree of freedom and a rotational degree of freedom.
-
55. The apparatus of claim 53, wherein said object manipulation module is adapted to manipulate at least two independent degrees of freedom simultaneously.
-
56. The apparatus of claim 53, wherein said object manipulation module is adapted to manipulate at least three independent degrees of freedom simultaneously.
-
57. The apparatus of claim 53, wherein said object manipulation module is adapted to manipulate at least six independent degrees of freedom simultaneously.
-
58. The apparatus of claim 53, further comprising a relocation module that, when operating, relocates said three-dimensional cursor to said location of said local origin point by application of said mathematical transformation.
-
59. The apparatus of claim 58, wherein said relocation module is operative only during a duration of the manipulation.
-
60. The apparatus of claim 59, further comprising a visual aid module that, when operating, provides a visual aid to help the user select and manipulate said virtual object.
-
61. The apparatus of claim 60, wherein said visual aid module is responsive to a user command, said visual aid module constraining a display of a point manipulated by a user to a locus aligned to an axis of said three-dimensional modeling environment.
-
62. The apparatus of claim 51, further comprising a cursor repositioning module that, when operating, moves said three-dimensional cursor to a position the cursor would have if said manipulation of said input device by the user had been applied directly to said three-dimensional cursor.
-
63. The apparatus of claim 62, wherein said cursor repositioning module operates in response to a command issued by the user.
-
64. The apparatus of claim 63, wherein said command is a release of said selected virtual object.
-
65. The apparatus of claim 54, further comprising a visual aid module that, when operating, provides a visual aid to help the user select and manipulate said virtual object.
-
66. The apparatus of claim 65, wherein said visual aid module is responsive to a user command, said visual aid module constraining a display of a point manipulated by the user to a locus aligned to an axis of said three-dimensional modeling environment.
-
67. The apparatus of claim 65, wherein said visual aid module is responsive to a user command, said visual aid module constraining a display of a point manipulated by the user to a locus consistent with user-defined geometrical limitations.
-
68. The apparatus of claim 65, wherein said visual aid module represents a second view of at least one of said one or more virtual objects in a second two-dimensional display space, said first two-dimensional display space and said second two-dimensional display space corresponding to different planes of said three-dimensional modeling environment.
-
69. The apparatus of claim 68, wherein said visual aid module represents said second view on said second two-dimensional display space whose plane is orthogonal to a plane of said first two-dimensional display space.
-
70. The apparatus of any of claims 43, 44, 45, 49, 51, 53, 58, 60, 62, or 65, wherein said input device comprises a haptic device.
-
71. The apparatus of claim 70, wherein said haptic device comprises a haptic device having force feedback actuators operating in at least three degrees of freedom to apply force to the user.
-
72. The apparatus of claim 71, further comprising a haptic aid module to help the user select and manipulate said virtual object.
-
73. The apparatus of claim 72, wherein said haptic aid module computes a dynamic friction force to be applied to the user by said haptic device during a positional correspondence of said object and said cursor in two dimensions of said three-dimensional modeling environment.
-
74. The apparatus of claim 72, wherein, during an activation of said user-activated visual constraint limiting a point to a locus aligned to an axis of said three-dimensional modeling environment, said haptic aid module activates at least one of said force feedback actuators to provide haptic force to the user upon deviation of said point from said locus.
-
75. The apparatus of claim 74, wherein said visual aid module is additionally adapted to display a visual aid component that indicates a location of said cursor along an axis.
-
76. The apparatus of claim 72, wherein, during an activation of said user-activated visual constraint limiting a point to a selected line, said haptic aid module activates at least one of said force feedback actuators to provide haptic force to the user upon deviation of said point from said line.
-
77. The apparatus of claim 76, wherein said visual aid module is additionally adapted to display a visual aid component that indicates a location of said cursor along a line.
-
78. The apparatus of claim 72, wherein, during an activation of said user-activated visual constraint limiting a point to a selected plane, said haptic aid module activates at least one of said force feedback actuators to provide haptic force to the user upon deviation of said point from said plane.
-
79. The apparatus of claim 78, wherein said visual aid module is additionally adapted to display a visual aid component that indicates a location of said cursor on a plane.
Specification