Photography Recognition Translation
3 Assignments
0 Petitions
Abstract
Methods are described for efficient and substantially instant recognition and translation of text in photographs. A user is able to select an area of interest for subsequent processing. Optical character recognition (OCR) may be performed on the wider area than that selected for determining the subject domain of the text. Translation to one or more target languages is performed. Manual corrections may be made at various stages of processing. Variations of translation are presented and made available for substitution of a word or expression in the target language. Translated text is made available for further uses or for immediate access.
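The processing flow summarized in the abstract can be sketched as a small pipeline. This is an illustrative sketch only: `ocr()` and `translate()` are invented stand-ins for whatever OCR engine and translation backend an implementation would actually use, neither of which the patent specifies.

```python
# Hypothetical sketch of the photograph-translation pipeline from the abstract.
# ocr() and translate() are toy placeholders, not real engines.

def ocr(image_region):
    # Placeholder: a real implementation would run optical character
    # recognition on the cropped pixels of the selected region.
    return image_region["text"]

def translate(text, source, target):
    # Placeholder: a toy dictionary stands in for dictionary, machine, or
    # human translation, all of which the claims allow.
    toy = {("fr", "en"): {"sortie": "exit"}}
    return " ".join(toy[(source, target)].get(w, w) for w in text.split())

def translate_region(image, region, source="fr", target="en"):
    """Select an area of interest, recognize its text, translate, return
    the text that would be displayed on the device."""
    area = image["regions"][region]      # user-selected area of interest
    recognized = ocr(area)               # character recognition
    return translate(recognized, source, target)

image = {"regions": {"sign": {"text": "sortie"}}}
print(translate_region(image, "sign"))   # -> exit
```

Each stage corresponds to a step of the independent claims below; the claims additionally allow manual correction between the recognition and translation stages.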
227 Citations
38 Claims
1. A method for translating text in an electronic image, the method comprising:
acquiring access to the electronic image by an electronic device;
receiving an indication through the electronic device of an area of interest in the electronic image;
identifying the area of interest;
recognizing characters in the electronic image, wherein the recognized characters are associated with the identified area of interest;
translating the recognized characters from a source language into a target language; and
displaying the translated characters in the target language on a display of the electronic device.
2. The method of claim 1, wherein the translated characters represent a word, word combination, text or hieroglyph, and wherein recognizing characters in the electronic image includes recognizing by context or as a group.
3. The method of claim 1, wherein the method further comprises:
after recognizing characters in the electronic image, displaying the recognized characters on the display of the electronic device; and
before translating the recognized characters into the target language, receiving an indication associated with translation of the recognized characters.
4. The method of claim 1, wherein translating is performed on characters included in a sentence or paragraph associated with the area of interest, wherein the sentence or paragraph may be partially outside a boundary of the area of interest.
5. The method of claim 3, wherein translating characters outside a boundary of the area of interest may include a translation of all text in the electronic image.
6. The method of claim 1, wherein acquiring access to the electronic image may be performed through a camera of the electronic device.
7. The method of claim 6, wherein acquiring access to the electronic image by the electronic device is through detecting an interaction with a control of the camera of the electronic device.
8. The method of claim 1, wherein the method further comprises:
after acquiring access to the electronic image, displaying at least a portion of the electronic image on a display of an electronic device, and wherein identifying the area of interest includes receiving an indication through the electronic device corresponding to the area of interest on the displayed electronic image.
9. The method of claim 1, wherein the method further comprises:
after recognizing the characters in the electronic image and prior to translating the recognized characters, displaying the result of optical character recognition on the screen of the electronic device; and
prior to translating the recognized characters, receiving an indication corresponding to a correction to one or more recognized characters.
10. The method of claim 9, wherein the method further comprises:
after receiving the indication corresponding to the correction to one or more recognized characters, making the correction to the recognized characters, and displaying the corrected recognized characters.
11. The method of claim 9, wherein the correction of recognized characters may be made automatically based on alternative variants of recognition, in which case the indication corresponding to the correction causes insertion of an appropriate variant in the recognized text, or may be made by manually inputting a correct variant from a keyboard.
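The correction mechanism of claims 9 through 11 can be illustrated with a small sketch. The data shapes here are invented for the example: each uncertain word is assumed to carry a ranked list of recognition variants, and a user indication either selects one of them or supplies typed text.

```python
# Illustrative sketch (not from the patent text) of correcting a recognized
# word, either by choosing among stored recognition variants or by manual
# keyboard input, as claims 9-11 describe.

def apply_correction(recognized, variants, word_index, choice=None, typed=None):
    """Replace one recognized word with a stored variant (choice) or with
    manually entered text (typed)."""
    words = recognized.split()
    if choice is not None:
        words[word_index] = variants[word_index][choice]
    elif typed is not None:
        words[word_index] = typed
    return " ".join(words)

variants = {1: ["cafe", "care", "core"]}   # alternatives for the 2nd word
text = "the cafc door"                     # OCR output with one error
print(apply_correction(text, variants, 1, choice=0))     # -> the cafe door
print(apply_correction(text, variants, 1, typed="cave")) # -> the cave door
```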
12. The method of claim 1, wherein recognition only occurs for characters associated with the area of interest.
13. The method of claim 1, wherein recognition is performed on characters included in a sentence or paragraph, wherein the sentence or paragraph is associated with the area of interest, wherein the sentence or paragraph may be partially outside a boundary of the area of interest, and wherein the area outside a boundary of the area of interest may include the entire electronic image.
14. The method of claim 1, wherein the translated characters are associated with the indicated area of interest.
15. The method of claim 1, wherein the translation of recognized characters is performed according to an identified subject domain, wherein the subject domain is determined based on a result of recognition and translation of characters from an area that is larger in at least one dimension than the indicated area of interest.
16. The method of claim 15, wherein the translation is performed in accordance with the identified subject domain, wherein the subject domain is determined based on a result of recognition and translation of only the selected area of interest.
17. The method of claim 15, wherein the subject domain may be chosen automatically based on a history of translations or a history of corrections of translation of other images accessed by the electronic device.
18. The method of claim 15, wherein the subject domain for translation is identified based on data content resident on the electronic device.
19. The method of claim 15, wherein the subject domain for translation is identified based on geolocation data.
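Claims 15 through 19 describe selecting a subject domain from the recognized text, the user's history, device content, or geolocation. A minimal sketch of one such strategy, with invented keyword lists standing in for real domain models: score candidate domains against the recognized words, and fall back to the domain most frequent in the translation history when the text gives no evidence.

```python
# Hypothetical subject-domain selection (claims 15-19). The keyword sets and
# history format are invented for this example; a real system would use
# trained domain classifiers.
from collections import Counter

DOMAIN_TERMS = {
    "menu": {"soup", "grilled", "dessert"},
    "transport": {"platform", "departure", "ticket"},
}

def pick_domain(recognized_words, history):
    # Score each domain by keyword overlap with the recognized text.
    scores = {d: len(terms & set(recognized_words))
              for d, terms in DOMAIN_TERMS.items()}
    best = max(scores, key=scores.get)
    if scores[best] > 0:
        return best
    # No textual evidence: reuse the most common domain in past translations.
    return Counter(history).most_common(1)[0][0]

print(pick_domain(["grilled", "fish", "dessert"], ["transport"]))      # -> menu
print(pick_domain(["hello"], ["transport", "menu", "transport"]))      # -> transport
```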
20. The method of claim 1, wherein the method further includes determining a language for recognizing said characters in the electronic image, wherein the language for recognizing said characters is determined based on geolocation data.
21. The method of claim 20, wherein the determination of the language for recognizing said characters further comprises:
establishing the coordinates of the location of the electronic device by a navigation module;
searching for and acquiring from a database a country or region based on data acquired from the navigation module;
searching for and determining from a database a list of one or more languages that are used in the acquired country or region; and
making the list of one or more languages available for use in recognizing said characters in the electronic image.
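The lookup steps of claim 21 can be sketched as follows. The bounding boxes and the country-to-language table are simplified, invented stand-ins for the navigation module and the databases the claim refers to.

```python
# Sketch of claim 21: resolve device coordinates to a country, then look up
# candidate recognition languages for that country. Data tables are toy
# examples, not real geographic databases.

COUNTRY_BOXES = {
    # country: (lat_min, lat_max, lon_min, lon_max) -- crude bounding boxes
    "France": (41.0, 51.5, -5.5, 9.8),
    "Japan": (24.0, 46.0, 123.0, 146.0),
}
COUNTRY_LANGUAGES = {"France": ["French"], "Japan": ["Japanese"]}

def languages_for_location(lat, lon):
    """Return the list of languages to offer for OCR at these coordinates."""
    for country, (la0, la1, lo0, lo1) in COUNTRY_BOXES.items():
        if la0 <= lat <= la1 and lo0 <= lon <= lo1:
            return COUNTRY_LANGUAGES[country]
    return []   # unknown location: no geolocation-based suggestion

print(languages_for_location(48.85, 2.35))   # Paris -> ['French']
```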
22. The method of claim 1, wherein the source language for translation is determined based on geolocation data.
23. The method of claim 1, wherein displaying the translated characters in the target language is performed only for characters associated with the area of interest.
24. The method of claim 1, wherein translation may be performed automatically by dictionary translation or machine translation, or by receiving on the electronic device a human-rendered translation.
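The alternatives in claim 24 suggest a simple fallback chain. This sketch uses invented toy backends; the ordering (dictionary, then machine translation, then a request for a human translation) is one plausible policy, not one the claim mandates.

```python
# Hypothetical dispatch over the translation sources claim 24 allows:
# dictionary, machine translation, or a human-rendered translation.

def dictionary_translate(text):
    toy = {"sortie": "exit"}        # toy dictionary for the example
    return toy.get(text)            # None if the word is unknown

def machine_translate(text):
    return None                     # stand-in: no MT engine available here

def request_human_translation(text):
    # Stand-in for receiving a human-rendered translation on the device.
    return f"[awaiting human translation of '{text}']"

def translate_any(text):
    for backend in (dictionary_translate, machine_translate):
        result = backend(text)
        if result is not None:
            return result
    return request_human_translation(text)

print(translate_any("sortie"))   # -> exit
```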
25. The method of claim 1, wherein the method further comprises:
after displaying the translated characters, detecting an indication corresponding to a change for a translated portion of text.
26. An electronic device for translating text associated with an image, the electronic device including:
a power source;
a display;
a processor;
a memory in electronic communication with said processor, the memory configured with instructions for performing a method, the method comprising:
receiving by the electronic device an indication of an area of interest in relation to the image;
identifying coordinates associated with the area of interest;
recognizing characters in the electronic image, wherein the recognized characters are associated with the identified area of interest;
translating the recognized characters from a source language into a target language; and
displaying the translated characters in the target language on the display of the electronic device.
27. The electronic device of claim 26, wherein the method further comprises:
after recognizing characters in the image, displaying the recognized characters on the display of the electronic device; and
before translating the recognized characters into the target language, receiving an indication to proceed with translation of the recognized characters.
28. The electronic device of claim 26, wherein translating is performed on characters included in a sentence or paragraph associated with the area of interest, wherein the sentence or paragraph may be partially outside a boundary of the area of interest.
29. The electronic device of claim 28, wherein translating characters outside a boundary of the area of interest may include a translation of all text in the electronic image.
30. The electronic device of claim 26, wherein the electronic device further includes a camera, and wherein the method further comprises, before receiving by the electronic device the indication of the area of interest, capturing the image through use of the camera.
31. The electronic device of claim 26, wherein the method further comprises:
before receiving by the electronic device the indication of the area of interest in relation to the image, displaying at least a portion of the image on the display, and wherein identifying the area of interest includes receiving an indication through the electronic device corresponding to the area of interest on the displayed image.
32. The electronic device of claim 26, wherein the method further comprises:
after recognizing the characters in the electronic image and prior to translating the recognized characters, displaying the result of optical character recognition on the screen of the electronic device; and
prior to translating the recognized characters, displaying an indication of a possible error associated with a recognized character, and receiving by the electronic device an indication corresponding to a correction to the recognized character.
33. The electronic device of claim 32, wherein displaying the indication of a possible error associated with said recognized character includes displaying one or more variants of recognition to correct the possible error, and wherein the method further comprises, in response to receiving the indication corresponding to the correction, insertion of an appropriate variant in the recognized text corresponding to the indication.
34. The electronic device of claim 26, wherein recognition only occurs for characters associated with the area of interest.
35. The electronic device of claim 26, wherein the translation of recognized characters is performed according to an identified subject domain, wherein the subject domain is determined based on a result of recognition and translation of characters from an area that is larger in at least one dimension than the indicated area of interest.
36. The electronic device of claim 35, wherein the translation is performed in accordance with the identified subject domain, wherein the subject domain is determined based on a result of recognition and translation of text only associated with the selected area of interest.
37. The electronic device of claim 35, wherein the subject domain may be chosen automatically based on a history of translations or a history of corrections of translation of other images accessed by the electronic device.
38. The electronic device of claim 26, wherein displaying the translated characters in the target language is performed only for characters associated with the area of interest.
Specification