Input method designed for augmented reality goggles
First Claim
1. A viewing apparatus for a user comprising:
at least one lens through which the user can visually perceive a field of view; and
a processor configured to:
receive first data indicative of a first proximity input within a first region with respect to the at least one lens, wherein the first proximity input is indicative of a location of a first touch by the user on the at least one lens;
perform a first operation in response to the received first data;
receive second data indicative of a second proximity input within a second region with respect to the at least one lens, wherein the second proximity input is indicative of a location of a second touch by the user on the at least one lens;
select an object within the field of view corresponding to the first data, wherein the object and the location of the first touch on the at least one lens are on a common line of sight of the user; and
perform a second operation pertaining to the object in response to the received second data, wherein the second operation is an operation from one or more operations presented in response to the received first data.
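The claim above recites a two-stage interaction: a first proximity input selects an object on the user's line of sight and causes one or more operations to be presented, and a second proximity input performs one of those presented operations. A minimal sketch of that flow, using hypothetical class, method, and operation names that are illustrative assumptions rather than anything recited in the patent:

```python
# Illustrative sketch of the two-stage input of claim 1. All names and
# the example operations ("zoom", "share", "annotate") are assumptions.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TwoTouchFlow:
    presented_ops: List[str] = field(default_factory=list)
    selected_object: Optional[str] = None
    result: Optional[str] = None

    def first_touch(self, lens_location):
        # First operation: resolve the object whose line of sight passes
        # through the touch location, then present candidate operations.
        self.selected_object = f"object@{lens_location}"
        self.presented_ops = ["zoom", "share", "annotate"]
        return self.presented_ops

    def second_touch(self, choice):
        # Second operation: the claim requires it to come from the list
        # presented in response to the first input.
        if choice not in self.presented_ops:
            raise ValueError("second operation must come from presented list")
        self.result = f"{choice} -> {self.selected_object}"
        return self.result

flow = TwoTouchFlow()
flow.first_touch((12, 34))        # first proximity input in the first region
print(flow.second_touch("zoom"))  # prints "zoom -> object@(12, 34)"
```

The guard in `second_touch` reflects the final "wherein" clause: the second operation is constrained to the operations presented in response to the first data.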
Abstract
Apparatuses, methods, systems and computer-readable media for using proximity inputs on or near a touch-screen lens to select objects within a field of view are presented. In some embodiments, a viewing apparatus (e.g., head-mounted display, augmented reality goggles) may include at least one lens, wherein the lens can sense touches or near-touches and output data indicative of a location of the proximity input by the user. A processor may receive the data and may select an object within the field of view of the user corresponding to the data, wherein the object and the location of the proximity input on or near the lens by the user are on a common line of sight. In some embodiments, the viewing apparatus may include at least one camera that is configured to record at least one image representative of the user's field of view.
23 Citations
39 Claims
1. A viewing apparatus for a user comprising:
at least one lens through which the user can visually perceive a field of view; and
a processor configured to:
receive first data indicative of a first proximity input within a first region with respect to the at least one lens, wherein the first proximity input is indicative of a location of a first touch by the user on the at least one lens;
perform a first operation in response to the received first data;
receive second data indicative of a second proximity input within a second region with respect to the at least one lens, wherein the second proximity input is indicative of a location of a second touch by the user on the at least one lens;
select an object within the field of view corresponding to the first data, wherein the object and the location of the first touch on the at least one lens are on a common line of sight of the user; and
perform a second operation pertaining to the object in response to the received second data, wherein the second operation is an operation from one or more operations presented in response to the received first data.
Dependent claims: 2-18.
19. A method for use with a viewing apparatus, comprising:
receiving first data indicative of a first proximity input performed within a first region with respect to at least one lens of the viewing apparatus, a field of view being observable through the at least one lens by a user of the viewing apparatus, wherein the first proximity input includes a location of a first touch by the user on the at least one lens;
performing a first operation in response to the received first data;
receiving second data indicative of a second proximity input performed within a second region with respect to the at least one lens of the viewing apparatus, wherein the second proximity input includes a location of a second touch by the user on the at least one lens;
selecting an object within the field of view corresponding to the first data, wherein the object and the location of the first touch on the at least one lens are on a common line of sight of the user; and
performing a second operation pertaining to the object in response to the received second data, wherein the second operation is an operation from one or more operations presented in response to the received first data.
Dependent claims: 20-34.
35. A viewing apparatus for a user comprising:
at least one viewing means through which the user can visually perceive a field of view; and
means for receiving first data indicative of a first proximity input within a first region with respect to the at least one lens, wherein the first proximity input is indicative of a location of a touch by the user on the at least one lens;
means for performing a first operation in response to the received first data;
means for receiving second data indicative of a second proximity input within a second region with respect to the at least one lens, wherein the second proximity input is indicative of a second location of a touch by the user on the at least one lens;
means for selecting an object within the field of view corresponding to the first data, wherein the object and the location of the touch on the at least one lens are on a common line of sight of the user; and
means for performing a second operation pertaining to the object in response to the received second data, wherein the second operation is an operation from one or more operations presented in response to the received first data.
Dependent claim: 36.
37. A computer program product residing on a non-transitory processor-readable medium and comprising processor-readable instructions configured to cause a processor to:
receive first data indicative of a first proximity input within a first region with respect to at least one lens through which a user can visually perceive a field of view, wherein the first proximity input is indicative of a first location of a touch by the user on the at least one lens;
perform a first operation in response to the received first data;
receive second data indicative of a second proximity input within a second region with respect to the at least one lens, wherein the second proximity input is indicative of a second location of a touch by the user on the at least one lens;
select an object within the field of view corresponding to the first data, wherein the object and the location of the touch on the at least one lens are on a common line of sight of the user; and
perform a second operation pertaining to the object in response to the received second data, wherein the second operation is an operation from one or more operations presented in response to the received first data.
Dependent claims: 38 and 39.
Specification