Object search method and apparatus
First Claim
1. An object search method implemented by a terminal communicatively coupled to a server, the method comprising:
receiving, by the terminal, voice input and gesture input from a user;
determining, by the terminal, a name of a target object for which the user expects to search and a characteristic category of the target object according to the voice input;
determining, by the terminal, whether the name of the target object and the characteristic category of the target object correspond to a preset category;
extracting, by the terminal, extracted characteristic information locally on the terminal according to the name of the target object, the characteristic category of the target object, and an image corresponding to the gesture input and sending the extracted characteristic information and the name of the target object to the server when the name of the target object and the characteristic category of the target object correspond to the preset category;
sending, by the terminal, the name of the target object, the characteristic category of the target object, and the image corresponding to the gesture input to the server to enable the server to extract the extracted characteristic information when the name of the target object and the characteristic category of the target object do not correspond to the preset category;
receiving, by the terminal, a search result from the server that corresponds to the extracted characteristic information and the image corresponding to the gesture input; and
displaying, by the terminal, the search result.
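The branching claimed above (local feature extraction when the spoken object name and characteristic category match a preset category, server-side extraction otherwise) can be sketched in Python as follows. This is an illustrative sketch only; every identifier here (`PRESET_CATEGORIES`, `FakeServer`, `parse_voice`, `extract_features`) is a hypothetical stand-in, not a name from the patent, and the toy parsing and extraction stand in for real speech recognition and image processing.

```python
# Hypothetical sketch of the terminal-side flow in claim 1.
# All names below are illustrative, not identifiers from the patent.

# Characteristic categories the terminal can extract locally (illustrative).
PRESET_CATEGORIES = {("shirt", "color")}

def parse_voice(voice_input):
    """Toy stand-in for speech recognition:
    'find a shirt by color' -> ('shirt', 'color')."""
    words = voice_input.split()
    return words[-3], words[-1]

def extract_features(image, name, category):
    """Toy local extraction, e.g. dominant value of the selected image area."""
    return {"category": category, "value": max(set(image), key=image.count)}

class FakeServer:
    """Stand-in for the remote search service."""
    def search(self, name, features=None, category=None, image=None):
        if features is None:
            # Non-preset branch: the server performs the extraction itself.
            features = extract_features(image, name, category)
        return f"results for {name} with {features['category']}={features['value']}"

def search_object(voice_input, selected_area, server):
    name, category = parse_voice(voice_input)
    if (name, category) in PRESET_CATEGORIES:
        # Preset category: extract locally, send only compact feature data.
        feats = extract_features(selected_area, name, category)
        return server.search(name=name, features=feats)
    # Otherwise: send the selected image area and let the server extract.
    return server.search(name=name, category=category, image=selected_area)

result = search_object("find a shirt by color", ["red", "red", "blue"], FakeServer())
print(result)  # results for shirt with color=red
```

The design point the claim turns on is visible in the two branches: for preset categories only compact feature data crosses the network, while for other categories the raw image area is offloaded to the server.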
Abstract
An object search method and apparatus, where the method includes receiving voice input and gesture input from a user; determining, according to the voice input, a name of a target object for which the user expects to search and a characteristic category of the target object; extracting characteristic information of the characteristic category from an image area selected by the user by means of the gesture input; and searching for the target object according to the extracted characteristic information and the name of the target object. The solutions provided in the embodiments of the present disclosure can provide a user with a more flexible search manner and reduce restrictions on the application scenario during a search.
Claims (20)
1. An object search method implemented by a terminal communicatively coupled to a server, the method comprising:
receiving, by the terminal, voice input and gesture input from a user;
determining, by the terminal, a name of a target object for which the user expects to search and a characteristic category of the target object according to the voice input;
determining, by the terminal, whether the name of the target object and the characteristic category of the target object correspond to a preset category;
extracting, by the terminal, extracted characteristic information locally on the terminal according to the name of the target object, the characteristic category of the target object, and an image corresponding to the gesture input and sending the extracted characteristic information and the name of the target object to the server when the name of the target object and the characteristic category of the target object correspond to the preset category;
sending, by the terminal, the name of the target object, the characteristic category of the target object, and the image corresponding to the gesture input to the server to enable the server to extract the extracted characteristic information when the name of the target object and the characteristic category of the target object do not correspond to the preset category;
receiving, by the terminal, a search result from the server that corresponds to the extracted characteristic information and the image corresponding to the gesture input; and
displaying, by the terminal, the search result.
Dependent claims: 2–11.
12. A terminal communicatively coupled to a server, comprising:
a receiver configured to receive voice input and gesture input from a user; and
a processor coupled to the receiver and configured to:
determine, according to the voice input, a name of a target object for which the user expects to search and a characteristic category of the target object;
determine whether the name of the target object and the characteristic category of the target object correspond to a preset category;
extract extracted characteristic information locally on the terminal according to the name of the target object, the characteristic category of the target object, and an image corresponding to the gesture input and send the extracted characteristic information and the name of the target object to the server when the name of the target object and the characteristic category of the target object correspond to the preset category;
send the name of the target object, the characteristic category of the target object, and the image corresponding to the gesture input to the server to enable the server to extract the extracted characteristic information when the name of the target object and the characteristic category of the target object do not correspond to the preset category;
receive a search result from the server that corresponds to the extracted characteristic information and the image corresponding to the gesture input; and
display the search result.
Dependent claims: 13 and 14.
15. An object search method implemented by a terminal communicatively coupled to a server, the method comprising:
receiving, by the terminal, voice input and gesture input from a user;
determining, by the terminal, a name of a target object for which the user expects to search and a characteristic category of the target object according to the voice input;
determining, by the terminal, whether the name of the target object and the characteristic category of the target object correspond to a preset category;
extracting, by the terminal, extracted characteristic information locally on the terminal according to the name of the target object, the characteristic category of the target object, and an image corresponding to the gesture input and sending the extracted characteristic information and the name of the target object to the server when the name of the target object and the characteristic category of the target object correspond to the preset category;
sending, by the terminal, the name of the target object, the characteristic category of the target object, and an image area that corresponds to the gesture input to the server to enable the server to extract the extracted characteristic information when the name of the target object and the characteristic category of the target object do not correspond to the preset category; and
receiving, by the terminal, a search result from the server, the search result being obtained by the server by searching for the target object represented by the name of the target object, wherein a characteristic of the characteristic category that is of the image area and represented by the category information is used as a search criterion.
Dependent claims: 16 and 17.
18. A terminal communicatively coupled to a server, comprising:
a receiver configured to receive voice input and gesture input from a user; and
a processor coupled to the receiver and configured to:
determine, according to the voice input, a name of a target object for which the user expects to search and a characteristic category of the target object;
determine whether the name of the target object and the characteristic category of the target object correspond to a preset category;
extract extracted characteristic information locally on the terminal according to the name of the target object, the characteristic category of the target object, and an image corresponding to the gesture input and send the extracted characteristic information and the name of the target object to the server when the name of the target object and the characteristic category of the target object correspond to the preset category;
send, to the server, the name of the target object, the characteristic category of the target object, and an image area that corresponds to the gesture input to enable the server to extract the extracted characteristic information when the name of the target object and the characteristic category of the target object do not correspond to the preset category; and
receive a search result from the server, the search result being obtained by the server by searching for the target object represented by the name of the target object, wherein a characteristic of the characteristic category that is of the image area and represented by the category information is used as a search criterion.
Dependent claims: 19 and 20.
Specification