Apparatus and method for controlling interface
First Claim
1. An apparatus for controlling an interface, comprising:
a receiver to receive image information comprising a depth image related to a user from a sensor;
a processor to determine a region of interest (ROI) based on depth information of the depth image, and generate, based on the image information, at least one of motion information regarding a hand motion within the ROI and gaze information regarding a gaze of the user; and
a controller to control a 2-dimensional or 3-dimensional graphical user interface (2D/3D GUI) based on at least one of the motion information and the gaze information,
wherein a resolution of the ROI is calculated based on a resolution of the sensor, a distance between the user and the sensor, and a predetermined size of an ROI window in the air.
Abstract
In an apparatus and method for controlling an interface, a user interface (UI) may be controlled using information on a hand motion and a gaze of a user, without separate tools such as a mouse and a keyboard. The UI control method thus provides more intuitive, immersive, and unified control of the UI. Because the region of interest (ROI) in which the hand motion of the user is sensed is calculated together with the UI object that is controlled based on the hand motion within the ROI, the user may control the UI object in the same manner and with the same feel regardless of the distance from the user to the sensor. In addition, since the positions and directions of the view points are adjusted based on the position and direction of the gaze, a binocular 2D/3D image based on motion parallax may be provided.
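The distance-dependent ROI resolution described above can be sketched with a simple pinhole-camera model. This is an illustrative assumption, not the patent's Equations 1 and 2: the use of a horizontal field of view, the function name `roi_resolution`, and the parameter values are all hypothetical.

```python
import math

def roi_resolution(sensor_res_px, fov_deg, distance_m, roi_window_m):
    """Approximate how many sensor pixels span an ROI window held in the air.

    Assumed pinhole model: at distance d, a sensor with field of view fov
    covers 2 * d * tan(fov / 2) meters across sensor_res_px pixels.
    """
    span_m = 2.0 * distance_m * math.tan(math.radians(fov_deg) / 2.0)
    pixels_per_meter = sensor_res_px / span_m
    return roi_window_m * pixels_per_meter

# A 0.30 m-wide ROI window seen by a 640 px-wide, 60-degree sensor:
near = roi_resolution(640, 60.0, 1.0, 0.30)  # user at 1 m from the sensor
far = roi_resolution(640, 60.0, 2.0, 0.30)   # user at 2 m from the sensor
# The farther the user stands, the fewer pixels cover the same ROI window,
# which is why the ROI resolution must be recalculated from the distance.
```

Under this model the pixel count falls off linearly with distance, so a fixed-size window in the air always maps to a predictable sensing resolution.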
12 Citations
22 Claims
1. An apparatus for controlling an interface, comprising:
a receiver to receive image information comprising a depth image related to a user from a sensor;
a processor to determine a region of interest (ROI) based on depth information of the depth image, and generate, based on the image information, at least one of motion information regarding a hand motion within the ROI and gaze information regarding a gaze of the user; and
a controller to control a 2-dimensional or 3-dimensional graphical user interface (2D/3D GUI) based on at least one of the motion information and the gaze information,
wherein a resolution of the ROI is calculated based on a resolution of the sensor, a distance between the user and the sensor, and a predetermined size of an ROI window in the air.
Dependent claims: 2-16.
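The receiver/processor/controller split of claim 1, where the GUI is controlled based on at least one of the two cues, can be sketched as a minimal dispatch. This is a hypothetical illustration, not the patented implementation: the names `MotionInfo`, `GazeInfo`, and `InterfaceController` are invented, and real values would come from the depth sensor.

```python
from dataclasses import dataclass

# Hypothetical carriers for the two cues named in claim 1; the fields
# are illustrative stand-ins for sensor-derived data.
@dataclass
class MotionInfo:
    dx: float  # hand displacement within the ROI
    dy: float

@dataclass
class GazeInfo:
    yaw: float   # gaze direction, degrees
    pitch: float

class InterfaceController:
    """Controls the 2D/3D GUI from at least one of motion and gaze."""

    def control(self, motion=None, gaze=None):
        actions = []
        if motion is not None:
            actions.append(f"move cursor by ({motion.dx}, {motion.dy})")
        if gaze is not None:
            actions.append(f"orient view toward ({gaze.yaw}, {gaze.pitch})")
        return actions

controller = InterfaceController()
both = controller.control(motion=MotionInfo(1.0, 2.0), gaze=GazeInfo(5.0, -3.0))
```

Accepting either cue alone (or both together) mirrors the claim's "at least one of" language.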
17. An apparatus for controlling an interface, comprising:
a receiver to receive image information comprising a depth image related to a user from a sensor;
a processor to generate, based on the image information, at least one of motion information regarding a hand motion of the user and gaze information regarding a gaze of the user; and
a controller to control a 2-dimensional or 3-dimensional graphical user interface (2D/3D GUI) based on at least one of the motion information and the gaze information,
wherein the processor determines a region of interest (ROI) and generates the motion information within the ROI, and
wherein the processor calculates a width of the ROI using Equation 1 and calculates a height of the ROI using Equation 2 as follows:
18. An apparatus for controlling an interface, comprising:
a receiver to receive image information related to a user from a sensor;
a generator to generate a 2D/3D GUI based on the image information; and
an outputter to output the 2D/3D GUI to a display apparatus,
wherein the generator comprises:
a view point adjustment unit to extract, from the image information, information regarding a left eye position of a left eye of the user and a right eye position of a right eye of the user, to adjust a position of a left view point corresponding to the left eye position, and to adjust a position of a right view point corresponding to the right eye position;
a 2D/3D scene rendering unit to render a left 2D/3D scene based on the left view point position and to render a right 2D/3D scene based on the right view point position; and
a 2D/3D GUI generation unit to generate the 2D/3D GUI by combining the rendered left 2D/3D scene and the rendered right 2D/3D scene,
wherein, when a plurality of users are within a sensing range of the sensor and a main user does not exist among the plurality of users, the view point adjustment unit extracts information on an average position of left eyes of the plurality of users and an average position of right eyes of the plurality of users, adjusts the left view point position corresponding to the average position of the left eyes, and adjusts the right view point position corresponding to the average position of the right eyes.
Dependent claims: 19.
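The multi-user fallback in claim 18, averaging each eye set when no main user exists, can be sketched as follows. This is a minimal sketch under assumptions: the function name `average_view_points` and the `(x, y, z)` tuple layout are illustrative, not from the patent.

```python
def average_view_points(left_eyes, right_eyes):
    """When no main user exists, average each eye set across all users.

    left_eyes / right_eyes: lists of (x, y, z) positions, one per user.
    Returns the adjusted (left_view_point, right_view_point) positions.
    """
    def mean(points):
        n = len(points)
        # Average each coordinate axis across all users.
        return tuple(sum(coords) / n for coords in zip(*points))

    return mean(left_eyes), mean(right_eyes)

# Two users at different positions: each view point lands midway
# between the corresponding eyes of the two users.
left_vp, right_vp = average_view_points(
    left_eyes=[(-0.10, 0.0, 1.0), (0.30, 0.0, 2.0)],
    right_eyes=[(-0.04, 0.0, 1.0), (0.36, 0.0, 2.0)],
)
# left_vp is approximately (0.10, 0.0, 1.5)
```

Rendering the left and right 2D/3D scenes from these averaged view points gives every user the same shared stereo image when no single user is in control.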
20. A method of controlling an interface, comprising:
receiving, by a processor, image information comprising a depth image related to a user;
determining, by the processor, a region of interest (ROI) based on depth information of the depth image;
generating, by the processor, based on the image information, at least one of motion information regarding a hand motion within the ROI and gaze information regarding a gaze of the user; and
controlling, by the processor, a 2D/3D GUI based on at least one of the motion information and the gaze information,
wherein a resolution of the ROI is calculated based on a resolution of the sensor, a distance between the user and the sensor, and a predetermined size of an ROI window in the air.
Dependent claims: 21, 22.
Specification