Altering a view perspective within a display environment
First Claim
1. A method for processing a gesture input for controlling an avatar movement on a display, the method comprising:
based at least upon determining a gesture indicative of a desire to define a central position of a user, defining, by a processor, a central area of a physical area, defining a first region of the physical area, and defining a second region of the physical area, the first region and the second region each defined with respect to a location on a floor in the central area, the central area, the first region and the second region each being three-dimensional and discrete from each other;
receiving, by the processor, first image data representing the user located in the first region, and determining, by the processor, from the first image data that the user has performed a gesture within the first region with a first portion of the user's body;
receiving, by the processor, second image data representing the user located in the second region, and determining, by the processor, from the second image data that the user has performed the gesture within the second region with the first portion of the user's body;
receiving, by the processor, third image data representing the user located in the first region, and determining, by the processor, from the third image data that the user has performed the gesture within the first region with a second portion of the user's body;
processing, by the processor, the gesture in a first manner to display and control a movement of the avatar on the display based at least on the gesture having been recognized to have been performed within the first region with a first portion of the user's body;
processing, by the processor, the gesture in a second manner to display and control a movement of the avatar on the display based at least on the gesture having been recognized to have been performed within the second region with the first portion of the user's body;
processing, by the processor, the gesture in a third manner to display and control a movement of the avatar on the display based at least on the gesture having been recognized to have been performed within the first region with the second portion of the user's body;
subsequently after processing the gesture to display and control the movement of the avatar in the first manner, in the second manner, or in the third manner on the display, receiving, by the processor, fourth image data representing the user located at least in part in the central area, and determining from the fourth image data that the user is located at least in part in the central area;
determining, by the processor, based on the fourth image data, that the user has moved at least in part to the central area from the first region or the second region;
halting, by the processor, the corresponding movement of the avatar in the first manner, in the second manner, or in the third manner on the display based at least on determining that the user has moved at least in part to the central area from the first region or the second region after performing the gesture in the first region or the second region; and
based at least upon determining, by the processor, a gesture indicative of a desire to redefine the central position of the user, redefining the central area of the physical area, redefining the first region of the physical area, and redefining the second region of the physical area.
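The claimed method can be illustrated with a minimal sketch (all names and geometry are hypothetical, not from the patent): three discrete 3-D regions are anchored to a user-chosen floor point, and the same gesture maps to a different avatar command depending on which region the user occupies and which body portion performs it; returning to the central area halts the movement.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Region:
    """Axis-aligned 3-D box in the physical space (metres); regions must not overlap."""
    x0: float; x1: float; z0: float; z1: float; height: float = 2.5

    def contains(self, x: float, y: float, z: float) -> bool:
        return self.x0 <= x < self.x1 and self.z0 <= z < self.z1 and 0.0 <= y < self.height

def define_regions(cx: float, cz: float, half: float = 0.3, depth: float = 1.0):
    """Anchor a central area at floor point (cx, cz), with the first and second
    regions placed on either side of it -- three discrete volumes."""
    central = Region(cx - half, cx + half, cz - half, cz + half)
    first = Region(cx - half - depth, cx - half, cz - half, cz + half)
    second = Region(cx + half, cx + half + depth, cz - half, cz + half)
    return central, first, second

def dispatch(regions, user_xyz, body_part: str):
    """Return the avatar command implied by the user's region and gesturing body part.

    The same recognized gesture is processed in a first, second, or third manner
    depending on (region, body portion); stepping back into the central area halts it.
    """
    central, first, second = regions
    if central.contains(*user_xyz):
        return "halt"                     # user re-entered the central area
    if first.contains(*user_xyz):
        # first region: first vs. second body portion selects the manner
        return "first-manner" if body_part == "first" else "third-manner"
    if second.contains(*user_xyz) and body_part == "first":
        return "second-manner"
    return None                           # gesture outside any mapped combination
```

Redefining the central position (the final limitation) would simply call `define_regions` again with a new floor point.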
2 Assignments
0 Petitions
Abstract
Disclosed herein are systems and methods for altering a view perspective within a display environment. For example, gesture data corresponding to a plurality of inputs may be stored. Each input may be provided to a game or application implemented by a computing device. Images of a user of the game or application may be captured. For example, a suitable capture device may capture several images of the user over a period of time. The images may be analyzed and processed for detecting a user's gesture. Aspects of the user's gesture may be compared to the stored gesture data for determining an intended gesture input for the user. The comparison may be part of an analysis for determining inputs corresponding to the gesture data, where one or more of the inputs are input into the game or application and cause a view perspective within the display environment to be altered.
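The pipeline the abstract describes -- capture a trajectory over time, compare it against stored gesture data, and emit the corresponding input -- can be sketched as a simple template matcher. All template names, paths, and input strings below are hypothetical illustrations, not values from the patent:

```python
import math

# Hypothetical stored gesture data: each template maps a gesture name to a
# normalized 2-D trajectory of a tracked body point and the input it triggers.
GESTURE_TEMPLATES = {
    "swipe_right": {"path": [(0.0, 0.0), (0.5, 0.0), (1.0, 0.0)], "input": "turn_view_right"},
    "swipe_up":    {"path": [(0.0, 0.0), (0.0, 0.5), (0.0, 1.0)], "input": "tilt_view_up"},
}

def resample(path, n):
    """Linearly resample a trajectory to n points so that observed and stored
    paths of different lengths can be compared point-by-point."""
    if len(path) == 1:
        return list(path) * n
    out = []
    for i in range(n):
        t = i * (len(path) - 1) / (n - 1)
        j = min(int(t), len(path) - 2)
        f = t - j
        (x0, y0), (x1, y1) = path[j], path[j + 1]
        out.append((x0 + f * (x1 - x0), y0 + f * (y1 - y0)))
    return out

def match_gesture(observed, templates=GESTURE_TEMPLATES, n=16):
    """Compare the observed trajectory (from captured images over time) to each
    stored template; return the best-matching gesture and its mapped input."""
    obs = resample(observed, n)
    name, info = min(
        templates.items(),
        key=lambda kv: sum(math.dist(p, q)
                           for p, q in zip(obs, resample(kv[1]["path"], n))),
    )
    return name, info["input"]
```

A real recognizer would track joints in 3-D and use a more robust distance (e.g. dynamic time warping), but the stored-template comparison and gesture-to-input mapping follow the same shape.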
274 Citations
18 Claims
1. A method for processing a gesture input for controlling an avatar movement on a display, the method comprising:
based at least upon determining a gesture indicative of a desire to define a central position of a user, defining, by a processor, a central area of a physical area, defining a first region of the physical area, and defining a second region of the physical area, the first region and the second region each defined with respect to a location on a floor in the central area, the central area, the first region and the second region each being three-dimensional and discrete from each other;
receiving, by the processor, first image data representing the user located in the first region, and determining, by the processor, from the first image data that the user has performed a gesture within the first region with a first portion of the user's body;
receiving, by the processor, second image data representing the user located in the second region, and determining, by the processor, from the second image data that the user has performed the gesture within the second region with the first portion of the user's body;
receiving, by the processor, third image data representing the user located in the first region, and determining, by the processor, from the third image data that the user has performed the gesture within the first region with a second portion of the user's body;
processing, by the processor, the gesture in a first manner to display and control a movement of the avatar on the display based at least on the gesture having been recognized to have been performed within the first region with a first portion of the user's body;
processing, by the processor, the gesture in a second manner to display and control a movement of the avatar on the display based at least on the gesture having been recognized to have been performed within the second region with the first portion of the user's body;
processing, by the processor, the gesture in a third manner to display and control a movement of the avatar on the display based at least on the gesture having been recognized to have been performed within the first region with the second portion of the user's body;
subsequently after processing the gesture to display and control the movement of the avatar in the first manner, in the second manner, or in the third manner on the display, receiving, by the processor, fourth image data representing the user located at least in part in the central area, and determining from the fourth image data that the user is located at least in part in the central area;
determining, by the processor, based on the fourth image data, that the user has moved at least in part to the central area from the first region or the second region;
halting, by the processor, the corresponding movement of the avatar in the first manner, in the second manner, or in the third manner on the display based at least on determining that the user has moved at least in part to the central area from the first region or the second region after performing the gesture in the first region or the second region; and
based at least upon determining, by the processor, a gesture indicative of a desire to redefine the central position of the user, redefining the central area of the physical area, redefining the first region of the physical area, and redefining the second region of the physical area.
View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12)
13. A system for processing a gesture input for controlling an avatar movement on a display, comprising:
a processor; and
a memory communicatively coupled to the processor when the system is operational, the memory bearing processor-executable instructions that, when executed on the processor, cause the system at least to:
based at least upon determining a gesture indicative of a desire to define a central position of a user, define a central area of a physical area, define a first region of the physical area, and define a second region of the physical area, the first region and the second region each defined with respect to a location on a floor in the central area, the central area, the first region and the second region each being three-dimensional and discrete from each other;
receive first image data representing the user located in the first region, and determine from the first image data that the user has performed a gesture within the first region with a first portion of the user's body;
receive second image data representing the user located in the second region, and determine from the second image data that the user has performed the gesture within the second region with the first portion of the user's body;
receive third image data representing the user located in the first region, and determine from the third image data that the user has performed the gesture within the first region with a second portion of the user's body;
process the gesture in a first manner to display and control a movement of the avatar on the display based at least on the gesture having been recognized to have been performed within the first region with the first portion of the user's body;
process the gesture in a second manner to display and control a movement of the avatar on the display based at least on the gesture having been recognized to have been performed within the second region with the first portion of the user's body;
process the gesture in a third manner to display and control a movement of the avatar on the display based at least on the gesture having been recognized to have been performed within the first region with the second portion of the user's body;
subsequently after processing the gesture to display and control the movement of the avatar in the first manner, in the second manner, or in the third manner on the display, receive fourth image data representing the user located at least in part in the central area, and determine from the fourth image data that the user is located at least in part in the central area;
determine, based on the fourth image data, that the user has moved at least in part to the central area from the first region or the second region;
halt the corresponding movement of the avatar in the first manner, in the second manner, or in the third manner on the display based at least on determining that the user has moved at least in part to the central area from the first region or the second region after performing the gesture in the first region or the second region; and
based at least upon determining a gesture indicative of a desire to redefine the central position of the user, redefine the central area of the physical area, redefine the first region of the physical area, and redefine the second region of the physical area.
View Dependent Claims (14, 15)
16. A computer-readable device that is not a signal, bearing computer-executable instructions that, when executed on a computer, cause the computer to perform operations for processing a gesture input for controlling an avatar movement on a display comprising:
based at least upon determining a gesture indicative of a desire to define a central position of a user, defining a central area of a physical area, defining a first region of the physical area, and defining a second region of the physical area, the first region and the second region each defined with respect to a location on a floor in the central area, the central area, the first region and the second region each being three-dimensional and discrete from each other;
receiving first image data representing the user located in the first region, and determining from the first image data that the user has performed a gesture within the first region with a first portion of the user's body;
receiving second image data representing the user located in the second region, and determining from the second image data that the user has performed the gesture within the second region with the first portion of the user's body;
receiving third image data representing the user located in the first region, and determining from the third image data that the user has performed the gesture within the first region with a second portion of the user's body;
processing the gesture in a first manner to display and control a movement of the avatar on the display based at least on the gesture having been recognized to have been performed within the first region with the first portion of the user's body;
processing the gesture in a second manner to display and control a movement of the avatar on the display based at least on the gesture having been recognized to have been performed within the second region with the first portion of the user's body;
processing the gesture in a third manner to display and control a movement of the avatar on the display based at least on the gesture having been recognized to have been performed within the first region with the second portion of the user's body;
subsequently after processing the gesture to display and control the movement of the avatar in the first manner, in the second manner, or in the third manner on the display, receiving fourth image data representing the user located at least in part in the central area, and determining from the fourth image data that the user is located at least in part in the central area;
determining, based on the fourth image data, that the user has moved at least in part to the central area from the first region or the second region;
halting the corresponding movement of the avatar in the first manner, in the second manner, or in the third manner on the display based at least on determining that the user has moved at least in part to the central area from the first region or the second region after performing the gesture in the first region or the second region; and
based at least upon determining a gesture indicative of a desire to redefine the central position of the user, redefining the central area of the physical area, redefining the first region of the physical area, and redefining the second region of the physical area.
View Dependent Claims (17, 18)
Specification