Proximity object tracker
Abstract
Object tracking technology, in which an illumination source is controlled to illuminate while a camera is capturing an image, thereby defining an intersection region within the image captured by the camera. The image is analyzed to detect an object within the intersection region. User input is determined based on the object detected within the intersection region, and an application is controlled based on the determined user input.
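The abstract describes isolating the intersection region by controlling the illumination source while capturing. The patent does not prescribe a specific algorithm, but one common way to realize this is frame differencing: an object inside the intersection region brightens when the source is on, while objects in the non-intersection region look the same in both frames. The sketch below illustrates that idea; the function name, the differencing approach, and the threshold value are all assumptions, not the claimed implementation.

```python
import numpy as np

def detect_in_intersection(lit_frame, unlit_frame, threshold=40):
    """Detect an object inside the intersection region via frame differencing.

    Pixels that brighten significantly between the unlit and lit frames
    are attributed to the illumination source, so they must lie inside
    the intersection region; unchanged pixels (non-intersection region)
    are excluded. Returns the object's centroid, or None if nothing is
    detected. Illustrative sketch only.
    """
    # Signed difference so darkening pixels do not wrap around.
    diff = lit_frame.astype(np.int16) - unlit_frame.astype(np.int16)
    mask = diff > threshold            # pixels brightened by the source
    if not mask.any():
        return None                    # no object in the intersection region
    ys, xs = np.nonzero(mask)
    return (xs.mean(), ys.mean())      # centroid of the detected object
```

For example, a bright blob that appears only in the lit frame yields its centroid, while identical frames yield `None`.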
30 Claims
1. An electronic system comprising:

- an image sensor having a field of view of a first area;
- an illumination source that is configured to illuminate a second area, the second area intersecting the first area to define (a) an intersection region illuminated by the illumination source and within the field of view of the image sensor and (b) a non-intersection region not illuminated by the illumination source and within the field of view of the image sensor; and
- a processing unit configured to perform operations comprising:
  - receiving an image from the image sensor;
  - analyzing the image to detect an object within the intersection region and exclude objects within the non-intersection region; and
  - determining user input based on the object detected within the intersection region, wherein the user input is determined based on a mapped position of the detected object.

Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9, 10.
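Claim 1's last element determines user input from a "mapped position of the detected object". The claim does not fix a mapping, but a simple illustration is scaling image coordinates to screen (or control-surface) coordinates. The sketch below is a hypothetical example; the function name and the linear scaling are assumptions, and a real system would calibrate for the intersection region's geometry.

```python
def map_position(centroid, frame_size, screen_size):
    """Map a detected object's image coordinates to screen coordinates.

    centroid:    (x, y) position of the detected object in the image
    frame_size:  (width, height) of the captured image
    screen_size: (width, height) of the target coordinate space

    Illustrative linear mapping only; ignores lens distortion and the
    intersection region's actual geometry.
    """
    fx, fy = frame_size
    sx, sy = screen_size
    x, y = centroid
    return (x / fx * sx, y / fy * sy)
```

For instance, an object detected at the center of a 640x480 frame maps to the center of a 1920x1080 screen.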
11. A method for determining a user input, comprising:

- receiving an image from an image sensor, the image sensor having a field of view of a first area;
- illuminating, via an illumination source, a second area intersecting the first area to define (a) an intersection region illuminated by the illumination source and within the field of view of the image sensor and (b) a non-intersection region not illuminated by the illumination source and within the field of view of the image sensor;
- analyzing the image to detect an object within the intersection region and exclude objects within the non-intersection region; and
- determining user input based on the object detected within the intersection region, wherein the user input is determined based on a mapped position of the detected object.

Dependent claims: 12, 13, 14, 15, 16, 17, 18, 19, 20.
21. An apparatus for determining a user input, comprising:

- means for receiving an image from an image sensor, the image sensor having a field of view of a first area;
- means for illuminating a second area intersecting the first area to define (a) an intersection region illuminated by the illuminating means and within the field of view of the image sensor and (b) a non-intersection region not illuminated by the illuminating means and within the field of view of the image sensor;
- means for analyzing the image to detect an object within the intersection region and exclude objects within the non-intersection region; and
- means for determining user input based on the object detected within the intersection region, wherein the user input is determined based on a mapped position of the detected object.

Dependent claims: 22, 23, 24.
25. A non-transitory storage medium comprising processor-readable instructions configured to cause a processor to:

- receive an image from an image sensor, the image sensor having a field of view of a first area;
- illuminate, via an illumination source, a second area intersecting the first area to define (a) an intersection region illuminated by the illumination source and within the field of view of the image sensor and (b) a non-intersection region not illuminated by the illumination source and within the field of view of the image sensor;
- analyze the image to detect an object within the intersection region and exclude objects within the non-intersection region; and
- determine user input based on the object detected within the intersection region, wherein the user input is determined based on a mapped position of the detected object.

Dependent claims: 26, 27, 28, 29, 30.
Specification