Smart necklace with stereo vision and onboard processing
First Claim
1. A wearable neck device for providing optical character or image recognition information to a user, comprising:
a band having a left portion, a right portion, and a central portion connecting the left portion and the right portion;
an inertial measurement unit (IMU) that is configured to detect IMU data including a body posture of the user;
at least one camera connected to the band, having a field of view, and configured to detect image data corresponding to a surrounding environment of the user;
a memory storing optical character or image recognition processing data corresponding to an algorithm or a set of instructions for identifying characters or images of documents;
a processor connected to the IMU, the memory and the at least one camera, and configured to:
    determine, using the image data, a location of the user,
    determine, using the IMU data, the body posture of the user,
    detect, using the image data, a plurality of documents in the surrounding environment of the user based on the location of the user,
    determine, using the image data and IMU data, that the plurality of documents are of interest to the user based on the determined location and the determined body posture of the user relative to a location of the plurality of documents in the surrounding environment of the user, and in response:
    recognize a document that is among the plurality of documents that are of interest and in the surrounding environment of the user,
    provide a description of the document that identifies the document to the user,
    receive a user selection of the document that is among the plurality of documents that are of interest and in the surrounding environment of the user in response to providing the description of the document to the user,
    select the document that is among the plurality of documents in the surrounding environment based on the user selection,
    adjust the field of view of the at least one camera such that the document is within the adjusted field of view,
    recognize, using the optical character or image recognition processing data, at least one of a character or an image of the document, and
    determine output data based on the at least one of the character or the image of the document; and
a speaker configured to provide audio information to the user based on the output data.
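The processing chain recited in the claim (detect candidate documents, judge which are of interest from the user's posture and location, announce each candidate, act on the user's selection) can be sketched in plain Python. This is an illustrative reading of the claim only, not the patented implementation; every name, threshold, and the bearing/distance document model below are assumptions made for the example.

```python
import math
from dataclasses import dataclass

# Hypothetical document record: each detected document is reduced to a
# bearing (radians off the wearer's facing direction) and a distance.
@dataclass
class Document:
    doc_id: int
    description: str
    bearing: float   # radians; 0 means directly ahead of the wearer
    distance: float  # meters from the wearer

def documents_of_interest(docs, facing_tolerance=math.radians(30), max_range=2.0):
    """A document counts as 'of interest' when the wearer's posture points
    toward it (small bearing) and it is close enough to read."""
    return [d for d in docs
            if abs(d.bearing) <= facing_tolerance and d.distance <= max_range]

def select_document(candidates, announce, choose):
    """Announce a description of each candidate, then act on the selection."""
    for d in candidates:
        announce(f"Document {d.doc_id}: {d.description}")
    return choose(candidates)

docs = [Document(1, "a letter on the desk", math.radians(5), 0.6),
        Document(2, "a poster on the far wall", math.radians(70), 4.0)]
interesting = documents_of_interest(docs)
# Stand-ins for the speaker and the user's spoken/button selection:
chosen = select_document(interesting, announce=print,
                         choose=lambda cs: cs[0] if cs else None)
```

In this toy run only the nearby, directly-faced letter survives the interest filter and is offered for selection; the OCR and audio-output steps of the claim would then run on the chosen document.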
2 Assignments
0 Petitions
Abstract
A wearable neck device and a method of operating the wearable neck device are provided for outputting optical character recognition information to a user. The wearable neck device has at least one camera, and a memory storing optical character or image recognition processing data. A processor detects a document in the surrounding environment and adjusts the field of view of the at least one camera such that the detected document is within the adjusted field of view. The processor analyzes the image data within the adjusted field of view using the optical character or image recognition processing data. The processor determines output data based on the analyzed image data. A speaker of the wearable neck device provides audio information to the user based on the output data.
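The abstract's field-of-view adjustment step, under a simple pinhole-camera assumption, amounts to computing the pan and tilt that would center the detected document in the frame. The sketch below is an illustration only; the 60°/45° field-of-view figures, the frame size, and the function name are assumptions, not values from the patent.

```python
def fov_adjustment(doc_center_px, image_size_px, hfov_deg=60.0, vfov_deg=45.0):
    """Return (pan, tilt) in degrees that would center the detected
    document's bounding-box center in the camera frame, using a
    small-angle pinhole approximation."""
    cx, cy = doc_center_px
    w, h = image_size_px
    pan = (cx - w / 2) / w * hfov_deg    # positive: pan right
    tilt = (cy - h / 2) / h * vfov_deg   # positive: tilt down
    return pan, tilt

# Document detected in the upper-left quadrant of a 640x480 frame:
pan, tilt = fov_adjustment((160, 120), (640, 480))  # -> (-15.0, -11.25)
```

A negative pan/tilt here means the camera (or a cropped digital window) should swing left and up until the document lies within the adjusted field of view.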
20 Claims
1. A wearable neck device for providing optical character or image recognition information to a user (recited in full under First Claim above). - View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11)
12. A method for providing optical character or image recognition information to a user of a wearable neck device having at least one camera with a field of view, the method comprising:
storing, in a memory, optical character or image recognition processing data corresponding to an algorithm or a set of instructions for identifying characters or images of documents;
detecting, using the at least one camera, image data corresponding to a surrounding environment of the user;
detecting, using an inertial measurement unit (IMU), IMU data that includes a body posture of the user;
determining, using a processor connected to the memory and the at least one camera, a location of the user based on the image data;
detecting, using the processor connected to the memory and the IMU, a body posture of the user based on the IMU data;
detecting, using the processor, a plurality of documents in the surrounding environment of the user based on the location of the user;
determining, using the processor, that the plurality of documents are of interest to the user based on the body posture and the location of the user relative to a location of the plurality of documents in the surrounding environment of the user, and in response:
recognizing, using the processor, a document among the plurality of documents that are of interest and in the surrounding environment of the user;
providing, using the processor, a description of the document to the user;
selecting, using the processor, the document that is among the plurality of documents that are of interest and in the surrounding environment of the user based on a user selection;
adjusting, using the processor, the field of view of the at least one camera such that the document is within the adjusted field of view;
recognizing, using the optical character or image recognition processing data, at least one of a character or an image of the document;
determining, using the processor, output data based on the at least one of the character or the image of the document; and
outputting, using a speaker, audio information to the user based on the output data. - View Dependent Claims (13, 14, 15, 16)
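One plausible reading of this method's "of interest" determination combines an IMU-derived reading posture with the user's position and heading relative to each document. The sketch below is an assumption-laden illustration, not the claimed algorithm; the pitch band, range limit, and angular threshold are invented for the example.

```python
import math

def is_reading_posture(imu_pitch_deg, min_tilt=15.0, max_tilt=60.0):
    """Head/torso pitched forward within an assumed 'reading' band."""
    return min_tilt <= imu_pitch_deg <= max_tilt

def interest_from_posture_and_location(user_pos, user_heading_deg, imu_pitch_deg,
                                       doc_positions, half_angle=35.0, max_range=1.5):
    """Return ids of documents that are both near the user and roughly in
    front of the user while the user holds a reading posture."""
    if not is_reading_posture(imu_pitch_deg):
        return []
    ux, uy = user_pos
    result = []
    for doc_id, (dx, dy) in doc_positions.items():
        dist = math.hypot(dx - ux, dy - uy)
        bearing = math.degrees(math.atan2(dy - uy, dx - ux))
        # Smallest signed angle between heading and document bearing:
        off = abs((bearing - user_heading_deg + 180) % 360 - 180)
        if dist <= max_range and off <= half_angle:
            result.append(doc_id)
    return result

# User at the origin, facing +x, head tilted 30 degrees down:
hits = interest_from_posture_and_location(
    (0.0, 0.0), 0.0, 30.0,
    {"menu": (1.0, 0.1), "sign": (0.0, 3.0)})  # -> ["menu"]
```

The nearby menu, almost straight ahead, passes both tests; the distant sign fails the range check, matching the claim's use of posture plus relative location to narrow the candidate set.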
17. A neck worn device for assisting a user having visual impairment, comprising:
a housing having a left end, a right end and a center portion positioned between the left end and the right end;
an inertial measurement unit (IMU) that is configured to detect IMU data including a body posture of the user;
a left side camera mounted proximal to the left end and configured to detect image data;
a right side camera mounted proximal to the right end and configured to detect image data, the left side camera and the right side camera forming a pair of stereo cameras;
a memory coupled to the housing and configured to store an optical character recognition software program;
a processor positioned within the housing and coupled to the IMU, the left side camera, the right side camera and the memory and configured to:
    determine, using the image data detected by the right side and the left side cameras, a location of the user,
    determine, using the IMU data, the body posture of the user,
    detect a plurality of documents in a surrounding environment of the neck worn device based on the location of the user,
    determine, using the image data detected by the right side and the left side cameras and the IMU data, that the plurality of documents are of interest to the user based on the body posture of the user and the location of the user relative to a location of the plurality of documents in the surrounding environment, and in response:
    recognize a document that is among the plurality of documents that are of interest and in the surrounding environment,
    provide a description of the document that identifies the document to the user,
    receive a user selection of the document that is among the plurality of documents that are of interest and in the surrounding environment,
    select the document that is among the plurality of documents in the environment based on the user selection,
    adjust a field of view of at least one of the left side camera or the right side camera such that the document is within the field of view,
    identify characters on the document using the optical character recognition software program, and
    generate a feedback signal based on the identified characters; and
a speaker configured to provide audio information based on the feedback signal. - View Dependent Claims (18, 19, 20)
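The stereo pair recited in this claim is what makes locating a document in depth straightforward: for a rectified pinhole pair, depth follows Z = f·B/d, where d is the horizontal disparity between the left-end and right-end cameras, f the focal length in pixels, and B the baseline. The claim does not give numbers; the focal length and the 12 cm baseline below are illustrative assumptions.

```python
def stereo_depth(x_left_px, x_right_px, focal_px=700.0, baseline_m=0.12):
    """Depth of a matched feature from horizontal stereo disparity,
    assuming a rectified pinhole camera pair: Z = f * B / d."""
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        return None  # feature at or beyond the usable stereo range
    return focal_px * baseline_m / disparity

# A document corner seen at x=400 px in the left image and x=330 px in
# the right image (70 px disparity) sits about 1.2 m away:
depth = stereo_depth(400, 330)  # -> 1.2
```

Combined with the cameras' bearings, such depth estimates give the document locations that the processor compares against the user's location and posture when deciding which documents are of interest.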
Specification