Methods, apparatus and data structures for providing a user interface, which exploits spatial memory in three-dimensions, to objects and which provides spatialized audio
Abstract
A graphical user interface in which object thumbnails are rendered on a simulated three-dimensional surface which (i) exploits spatial memory and (ii) allows more objects to be rendered on a given screen. The objects may be moved, continuously, on the surface with a two-dimensional input device.
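Moving an object continuously on the simulated surface with a two-dimensional input device amounts to mapping the 2D cursor position onto a point on the 3D surface. Below is a minimal sketch using a ray-plane intersection; the camera model (camera at the origin, image plane at z = -1), the function name, and the plane parameters are illustrative assumptions, not details taken from the patent.

```python
def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cursor_ray_to_surface(u, v, plane_point, plane_normal):
    """Cast a ray from a camera at the origin through screen point (u, v)
    on a z = -1 image plane, and intersect it with a planar surface given
    by a point on the plane and its normal."""
    origin = (0.0, 0.0, 0.0)
    direction = (u, v, -1.0)
    denom = _dot(direction, plane_normal)
    if abs(denom) < 1e-9:
        return None  # ray parallel to the surface: no hit
    t = _dot(tuple(p - o for p, o in zip(plane_point, origin)), plane_normal) / denom
    if t < 0:
        return None  # surface is behind the camera
    return tuple(o + t * d for o, d in zip(origin, direction))
```

Dragging then reduces to recomputing this intersection each time the 2D cursor moves and storing the result as the object's new surface location.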
12 Claims
1. A system which permits a user to interact with objects, the system comprising:

a) an input facility for accepting user inputs;
b) a storage facility containing
   i) location and state information for each of the objects, wherein the state information for each of the objects includes an indication of whether or not the object is active,
   ii) a visual representation of each of the objects,
   iii) a cursor location,
   iv) a three-dimensional environment including a three-dimensional surface, and
   v) a first audio cue;
c) a processing unit which
   i) accepts user inputs from the input facility,
   ii) updates (a) the location and state information for each of the objects contained in the storage facility, and (b) the cursor location contained in the storage facility, based on the accepted user inputs, and
   iii) generates video outputs based on
      A) the location and state information for each of the objects,
      B) the visual representation of each of the objects,
      C) the cursor location, and
      D) the three-dimensional surface, contained in the storage facility;
d) a video display unit for rendering the video outputs generated by the processing unit; and
e) an audio output device,
wherein the processing unit determines that an object is active if a cursor is on an object, based on the cursor location and the location of the object, and
wherein, if an object is active and the input facility accepts a move input, then
   i) the processing unit updates the state and location of the object,
   ii) the processing unit generates a video output based on the updated location of the object,
   iii) the video display unit renders the video output generated by the processing unit, and
   iv) the processing unit provides the first audio cue to the audio output device.
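Claim 1's activation rule (an object becomes active when the cursor is on it) reduces to a hit test of the stored cursor location against each object's stored location. A minimal sketch, assuming axis-aligned rectangular object bounds and a last-hit-wins policy, neither of which the claim specifies:

```python
from dataclasses import dataclass

@dataclass
class StoredObject:
    x: float              # location, as kept in the storage facility
    y: float
    width: float
    height: float
    active: bool = False  # state: whether or not the object is active

def update_activation(objects, cursor_x, cursor_y):
    """Mark each object active iff the cursor location falls on it;
    return the active object, if any."""
    hit = None
    for obj in objects:
        obj.active = (obj.x <= cursor_x <= obj.x + obj.width
                      and obj.y <= cursor_y <= obj.y + obj.height)
        if obj.active:
            hit = obj
    return hit
```

On a move input, the system would update `hit`'s coordinates, re-render, and trigger the first audio cue.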
2. A man-machine interface method for permitting a user to act on objects, for use with a machine having a video display device and a user input device, the man-machine interface method comprising:

a) generating a three-dimensional environment, having a three-dimensional surface, to be rendered on the video display device;
b) determining a virtual location of each of the objects in the three-dimensional environment;
c) generating visual representations of the objects, within the three-dimensional environment, at the determined locations, to be rendered on the video display device;
d) accepting inputs from the user input device;
e) determining a cursor location based on the accepted inputs;
f) generating the cursor at the determined cursor location, to be rendered on the video display device;
g) if the cursor is located on a location of one of the objects, defining that object as an active object;
h) when an object is defined as an active object, generating a first audio cue associated with object activation; and
i) if the user input provides a selection input and if an active object exists:
   i1) generating an animation moving the visual representation of the associated object to a preferred viewing location, which makes the object appear much closer and therefore larger, to be rendered on the video display device; and
   i2) generating a second audio cue associated with object selection.
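The selection animation of step i1) can be sketched as an interpolation of the object's position toward the preferred viewing location; decreasing depth brings the object nearer the camera, so it is rendered larger. The frame count and the linear easing are illustrative choices, not specified by the claim.

```python
def selection_animation(start, preferred, frames=12):
    """Return the per-frame (x, y, z) positions moving an object from its
    current location to the preferred viewing location."""
    sx, sy, sz = start
    px, py, pz = preferred
    return [(sx + (px - sx) * i / frames,
             sy + (py - sy) * i / frames,
             sz + (pz - sz) * i / frames)
            for i in range(1, frames + 1)]
```

Each returned position would be rendered in turn, with the second audio cue played as the animation starts.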
3. A man-machine interface method for permitting a user to act on objects, for use with a machine having a video display device and a user input device, the man-machine interface method comprising:

a) generating a three-dimensional environment, having a three-dimensional surface, to be rendered on the video display device;
b) determining a virtual location of each of the objects in the three-dimensional environment;
c) generating visual representations of the objects, within the three-dimensional environment, at the determined locations, to be rendered on the video display device;
d) accepting inputs from the user input device;
e) determining a cursor location based on the accepted inputs;
f) generating the cursor at the determined cursor location, to be rendered on the video display device;
g) if the cursor is located on a location of one of the objects, defining that object as an active object;
h) when an object is defined as an active object, generating a first audio cue associated with object activation; and
i) if the user input provides a move input and if an active object exists, then:
   i1) updating a location of the active object;
   i2) generating the visual representation of the active object at its updated location, to be rendered on the video display device; and
   i3) generating a second audio cue associated with moving an object; and
j) wherein the second audio cue is spatialized such that object movement in a virtual foreground of the three-dimensional environment is louder than object movement in a virtual background of the three-dimensional environment.
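The foreground/background loudness rule of step j) can be sketched as a gain that falls off with the object's depth in the environment. The linear ramp, the depth range, and the minimum gain floor are illustrative assumptions; the claim only requires that foreground movement be louder than background movement.

```python
def movement_cue_gain(z, z_near=0.0, z_far=100.0, min_gain=0.2):
    """Gain for the movement audio cue: loudest in the virtual foreground
    (small z), quieter toward the virtual background (large z)."""
    t = min(max((z - z_near) / (z_far - z_near), 0.0), 1.0)
    return 1.0 - t * (1.0 - min_gain)
```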
4. A man-machine interface method for permitting a user to act on objects, for use with a machine having a video display device and a user input device, the man-machine interface method comprising:

a) generating a three-dimensional environment, having a three-dimensional surface, to be rendered on the video display device;
b) determining a virtual location of each of the objects in the three-dimensional environment;
c) generating visual representations of the objects, within the three-dimensional environment, at the determined locations, to be rendered on the video display device;
d) accepting inputs from the user input device;
e) determining a cursor location based on the accepted inputs;
f) generating the cursor at the determined cursor location, to be rendered on the video display device;
g) if the cursor is located on a location of one of the objects, defining that object as an active object;
h) when an object is defined as an active object, generating a first audio cue associated with object activation; and
i) if the user input provides a move input and if an active object exists, then:
   i1) updating a location of the active object;
   i2) generating the visual representation of the active object at its updated location, to be rendered on the video display device; and
   i3) generating a second audio cue associated with moving an object; and
j) wherein the second audio cue is spatialized such that object movement in a left hand side or right hand side of the three-dimensional environment generates a left or right balanced audio cue, respectively.
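The left/right balance of step j) can be sketched as stereo panning driven by the object's horizontal position. A constant-power pan law is used here as an illustrative choice; the claim only requires that the cue be balanced toward the side on which the object moves.

```python
import math

def pan_gains(x, x_min=-1.0, x_max=1.0):
    """Constant-power stereo balance: returns (left_gain, right_gain)
    for an object at horizontal position x in the environment."""
    t = min(max((x - x_min) / (x_max - x_min), 0.0), 1.0)
    theta = t * math.pi / 2.0
    return math.cos(theta), math.sin(theta)
```

A constant-power law keeps perceived loudness roughly even as the object crosses from one side to the other.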
5. A man-machine interface method for permitting a user to act on objects, for use with a machine having a video display device and a user input device, the man-machine interface method comprising:

a) generating a three-dimensional environment, having a three-dimensional surface, to be rendered on the video display device;
b) determining a virtual location of each of the objects in the three-dimensional environment;
c) generating visual representations of the objects, within the three-dimensional environment, at the determined locations, to be rendered on the video display device;
d) accepting inputs from the user input device;
e) determining a cursor location based on the accepted inputs;
f) generating the cursor at the determined cursor location, to be rendered on the video display device;
g) if the cursor is located on a location of one of the objects, defining that object as an active object;
h) when an object is defined as an active object, generating a first audio cue associated with object activation; and
i) if the user input provides a move input and if an active object exists, then:
   i1) updating a location of the active object;
   i2) generating the visual representation of the active object at its updated location, to be rendered on the video display device; and
   i3) generating a second audio cue associated with moving an object; and
j) wherein the second audio cue has a pitch which is proportional to a speed at which the object moves in the interface rendered on the video display device.
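The speed-to-pitch mapping of step j) can be sketched as a frequency that rises in proportion to the measured on-screen speed. The base frequency, the scale factor, and the choice of an affine (base plus proportional term) mapping are illustrative assumptions.

```python
def movement_cue_pitch(speed, base_hz=220.0, hz_per_unit=40.0):
    """Frequency of the movement audio cue: rises in proportion to the
    speed at which the object moves across the rendered interface."""
    return base_hz + hz_per_unit * speed
```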
6. A man-machine interface method for permitting a user to act on objects, for use with a machine having a video display device and a user input device, the man-machine interface method comprising:

a) generating a three-dimensional environment, having a three-dimensional surface, to be rendered on the video display device;
b) determining a virtual location of each of the objects in the three-dimensional environment;
c) generating visual representations of the objects, within the three-dimensional environment, at the determined locations, to be rendered on the video display device;
d) accepting inputs from the user input device;
e) determining a cursor location based on the accepted inputs;
f) generating the cursor at the determined cursor location, to be rendered on the video display device;
g) if the cursor is located on a location of one of the objects, defining that object as an active object;
h) when an object is defined as an active object, generating a first audio cue associated with object activation; and
i) if the user input provides a move input and if an active object exists, then:
   i1) updating a location of the active object;
   i2) generating the visual representation of the active object at its updated location, to be rendered on the video display device; and
   i3) generating a second audio cue associated with moving an object; and
j) wherein a reverberation ratio parameter of the second audio cue is adjusted based on a location of the object in the interface rendered on the video display device.
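The reverberation ratio of step j) is commonly expressed as a wet/dry mix: more reverberant (wet) signal for distant objects, more direct (dry) signal for nearby ones. The linear ramp over a fixed depth range is an illustrative assumption; the claim only requires that the ratio depend on the object's location.

```python
def reverb_mix(z, z_far=100.0):
    """Return (wet_gain, dry_gain) for the movement cue as a function of
    object depth: distant objects sound more reverberant, near ones drier."""
    wet = min(max(z / z_far, 0.0), 1.0)
    return wet, 1.0 - wet
```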
8. A man-machine interface method for permitting a user to act on objects, for use with a machine having a video display device and a user input device, the man-machine interface method comprising:

a) generating a three-dimensional environment, having a three-dimensional surface, to be rendered on the video display device;
b) determining a virtual location of each of the objects in the three-dimensional environment;
c) generating visual representations of the objects, within the three-dimensional environment, at the determined locations, to be rendered on the video display device;
d) accepting inputs from the user input device;
e) determining a cursor location based on the accepted inputs;
f) generating the cursor at the determined cursor location, to be rendered on the video display device;
g) if the cursor is located on a location of one of the objects, defining that object as an active object;
h) when an object is defined as an active object, generating a first audio cue associated with object activation; and
i) if the user input provides a move input and if an active object exists, then:
   i1) updating a location of the active object;
   i2) generating the visual representation of the active object at its updated location, to be rendered on the video display device; and
   i3) generating a second audio cue associated with moving an object; and
j) wherein high frequency components of the second audio cue are adjusted based on the location of the object.
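Adjusting the cue's high-frequency components by location can be sketched as a distance-dependent low-pass filter: the farther the object, the more high-frequency content is removed. The one-pole filter, the linear cutoff mapping, and the parameter values are illustrative assumptions.

```python
def lowpass(samples, alpha):
    """One-pole low-pass filter; a smaller alpha removes more
    high-frequency content from the cue."""
    out, y = [], 0.0
    for s in samples:
        y += alpha * (s - y)
        out.append(y)
    return out

def cutoff_alpha(z, z_far=100.0, min_alpha=0.05):
    """Filter coefficient as a function of depth: near objects pass
    through almost unfiltered, distant ones sound muffled."""
    t = min(max(z / z_far, 0.0), 1.0)
    return max(min_alpha, 1.0 - t)
```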
10. A computer-based system having a man-machine interface, which permits a user to act on objects, for use in conjunction with a video display device and a user input device, the system comprising:

a processor; and
a memory connected to the processor and storing computer executable instructions therein;
wherein the processor, in response to execution of the instructions:
a) generates a three-dimensional environment, having a three-dimensional surface, to be rendered on the video display device;
b) determines a virtual location of each of the objects in the three-dimensional environment;
c) generates visual representations of the objects, within the three-dimensional environment, at the determined locations, to be rendered on the video display device;
d) accepts inputs from the user input device;
e) determines a cursor location based on the accepted inputs;
f) generates the cursor at the determined cursor location, to be rendered on the video display device;
g) if the cursor is located on a location of one of the objects, defines that object as an active object;
h) when an object is defined as an active object, generates a first audio cue associated with object activation; and
i) if the user input provides a selection input and if an active object exists:
   i1) generates an animation moving the visual representation of the associated object to a preferred viewing location, which makes the object appear much closer and therefore larger, to be rendered on the video display device; and
   i2) generates a second audio cue associated with object selection.
Specification