System and method for virtual object placement
Abstract
A computer system and method according to the present invention can receive multi-modal inputs such as natural language, gesture, text, sketch and other inputs in order to manipulate graphical objects in a virtual world. The components of an agent provided in accordance with the present invention can include one or more sensors, actuators, and cognition elements, such as interpreters, executive function elements, working memory, long-term memory and reasoners supporting an object placement approach. In one embodiment, the present invention can transform a user input into an object placement output. Further, the present invention provides, in part, an object placement algorithm, along with the command structure, vocabulary, and dialog that an agent is designed to support in accordance with various embodiments of the present invention.
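The agent components named in the abstract (sensors, actuators, and cognition elements such as interpreters, executive function elements, working memory, long-term memory, and reasoners) might be organized as sketched below. This is an illustrative sketch only; every class, field, and method name here is an assumption, not taken from the patent.

```python
from dataclasses import dataclass, field

# Illustrative sketch: names are assumptions, not taken from the patent.
# It mirrors the components listed in the abstract.

@dataclass
class Cognition:
    interpreters: list = field(default_factory=list)    # map multi-modal inputs to intents
    executive: list = field(default_factory=list)       # executive function elements
    working_memory: dict = field(default_factory=dict)  # current scene and dialog state
    long_term_memory: dict = field(default_factory=dict)
    reasoners: list = field(default_factory=list)       # e.g. an object-placement reasoner

@dataclass
class VirtualAgent:
    sensors: list = field(default_factory=list)    # perceive objects in the virtual world
    actuators: list = field(default_factory=list)  # move and place objects
    cognition: Cognition = field(default_factory=Cognition)

    def handle(self, user_input: str) -> str:
        """Transform a user input into an object-placement output."""
        self.cognition.working_memory["last_input"] = user_input
        return "placement plan for: " + user_input

agent = VirtualAgent()
print(agent.handle("put the cup on the table"))  # placement plan for: put the cup on the table
```

The `dataclass` layout is merely one convenient way to group the sensing, acting, and cognition roles the abstract assigns to the agent.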
Claims
1. A system for manipulating virtually displayed objects using a virtual agent represented in a virtual environment, comprising:
a display;
at least one input device adapted to receive at least one of speech, gesture, text and touchscreen inputs; and
a computer processor adapted to execute a program stored in a computer memory, the program being operable to provide instructions to the computer processor including:
receiving user input via the at least one input device, wherein the user input underspecifies a command for a virtual agent within the virtual environment to use in moving at least one target object in the virtual environment;
interfacing with the virtual environment via the virtual agent;
sensing, by the virtual agent, the at least one target object in the virtual environment; and
finding at least one valid location for the virtual agent to place the at least one target object in the virtual environment, wherein finding the at least one valid location includes retrieving a linguistic placement constraint, identifying the virtual agent's intent for a target object, identifying a placement preference for the virtual agent, determining one or more object properties of the target object and determining candidate placement surfaces for placement of the target object, determining at least one candidate activity surface on which the virtual agent is located while interacting with the at least one target object, and, for all candidate placement surfaces and the at least one candidate activity surface, identifying all objects in the virtual environment that are in contact with any of the candidate placement surfaces or the at least one candidate activity surface.
Dependent claims: 2-16.
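The valid-location steps recited in claim 1 can be read as a filtering pipeline: retrieve the linguistic constraint, intent, preference, and object properties; enumerate candidate placement surfaces plus the agent's activity surface; then, for each surface, identify the objects in contact with it so occupied space can be excluded. A minimal sketch under an assumed dictionary scene representation follows; the schemas, the one-dimensional "area" model of free space, and all field names are hypothetical, not from the patent.

```python
# Minimal sketch of claim 1's valid-location search. The scene/agent/target
# dictionary schemas and the scalar free-area model are assumptions made
# for illustration; they are not part of the patent.

def find_valid_locations(target, agent, scene):
    # Inputs recited by the claim (retrieved here, though this sketch only
    # applies the geometric free-space check below).
    constraint = scene.get("linguistic_constraint")           # e.g. "on the table"
    intent = agent.get("intent_for", {}).get(target["name"])  # agent's intent for target
    preference = agent.get("placement_preference")
    props = target.get("properties", {})                      # e.g. footprint

    # Candidate placement surfaces plus the activity surface(s) the agent occupies.
    placement = [s for s in scene["surfaces"] if s.get("supports_placement")]
    activity = [s for s in scene["surfaces"] if s.get("agent_on")]
    surfaces = {s["name"]: s for s in placement + activity}   # de-duplicate by name

    # For every candidate surface, identify all objects in contact with it,
    # then keep surfaces with enough free area for the target's footprint.
    candidates = []
    for surface in surfaces.values():
        contacts = [o for o in scene["objects"]
                    if surface["name"] in o.get("touching", [])]
        free = surface["area"] - sum(o["footprint"] for o in contacts)
        if free >= props.get("footprint", 0):
            candidates.append({"surface": surface["name"], "free_area": free})
    return candidates

scene = {
    "linguistic_constraint": "on the table",
    "surfaces": [
        {"name": "table_top", "supports_placement": True, "area": 10.0},
        {"name": "floor", "supports_placement": True, "agent_on": True, "area": 50.0},
    ],
    "objects": [{"name": "book", "touching": ["table_top"], "footprint": 2.0}],
}
target = {"name": "cup", "properties": {"footprint": 1.0}}
agent = {"placement_preference": "nearest", "intent_for": {"cup": "set down"}}
print(find_valid_locations(target, agent, scene))
```

In this toy scene, the table top (8.0 units free after subtracting the book) and the floor both survive the footprint check, so both are returned as candidate placements.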
-
-
17. A computer-implemented method, comprising:
receiving, by a computer engine, user input that underspecifies a command for a virtual agent within a virtual environment to use in moving at least one target object in the virtual environment;
interfacing, via the computer engine, with the virtual environment;
sensing, via the virtual agent associated with the computer engine, the at least one target object in the virtual environment; and
determining at least one valid location for the virtual agent to place the at least one target object in the virtual environment, wherein determining the at least one valid location includes retrieving a linguistic placement constraint, identifying the virtual agent's intent for a target object, identifying a placement preference for the virtual agent, determining one or more object properties of the target object and determining candidate placement surfaces for placement of the target object, determining at least one candidate activity surface on which the virtual agent is located while interacting with the at least one target object, and, for all candidate placement surfaces and the at least one candidate activity surface, identifying all objects in the virtual environment that are in contact with any of the candidate placement surfaces or the at least one candidate activity surface.
Dependent claim: 18.
19. A system for manipulating virtually displayed objects using a virtual agent in a virtual environment, comprising:
a display;
at least one input device adapted to receive at least one of speech, gesture, text and touchscreen inputs; and
a computer processor adapted to execute a program stored in a computer memory, the program being operable to provide instructions to the computer processor including:
receiving user input via the at least one input device, wherein the user input underspecifies a command for a virtual agent within the virtual environment to use in moving at least one target object in the virtual environment;
interfacing with the virtual environment via the virtual agent;
sensing, by the virtual agent, the at least one target object in the virtual environment; and
finding at least one valid orientation for the virtual agent to place the at least one target object in the virtual environment, wherein finding the at least one valid orientation includes retrieving a linguistic placement constraint, identifying the virtual agent's intent for a target object, identifying an orientation preference for the virtual agent, determining one or more object properties of the target object and determining candidate orientations for the target object, determining at least one candidate activity surface on which the virtual agent is located while interacting with the at least one target object, and, for the at least one candidate activity surface, identifying all objects in the virtual environment that are in contact with the at least one candidate activity surface.
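Claim 19 swaps location for orientation: candidate orientations of the target are tested against the activity surface, given the objects already in contact with it. A hedged sketch follows; the four 90-degree yaw candidates, the width/depth rectangle model, and all field names are assumptions for illustration, not taken from the patent.

```python
# Illustrative sketch of claim 19's orientation search. The yaw candidates,
# the rectangular footprint model, and the dictionary schemas are
# assumptions, not part of the patent.

def find_valid_orientations(target, agent, scene):
    constraint = scene.get("linguistic_constraint")   # e.g. "facing the agent"
    preference = agent.get("orientation_preference")  # agent's preferred orientation
    dims = target["dimensions"]                       # (width, depth)

    # The activity surface(s) on which the agent is located.
    activity = [s for s in scene["surfaces"] if s.get("agent_on")]

    valid = []
    for yaw in (0, 90, 180, 270):                     # candidate orientations
        w, d = dims if yaw % 180 == 0 else dims[::-1] # rotate the footprint
        for surface in activity:
            # Identify all objects in contact with the activity surface.
            contacts = [o for o in scene["objects"]
                        if surface["name"] in o.get("touching", [])]
            free_w = surface["width"] - sum(o.get("width", 0) for o in contacts)
            if w <= free_w and d <= surface["depth"]:
                valid.append(yaw)
                break
    return valid

scene = {"surfaces": [{"name": "desk", "agent_on": True, "width": 3.0, "depth": 1.0}],
         "objects": []}
target = {"dimensions": (2.0, 0.5)}
print(find_valid_orientations(target, {"orientation_preference": 0}, scene))  # [0, 180]
```

For the 2.0 x 0.5 target on a 3.0 x 1.0 desk, only the unrotated and 180-degree orientations fit, so the sketch returns those two yaw candidates.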
Specification