Vehicle systems and methods for determining a target based on a virtual eye position and a pointing direction
First Claim
1. A vehicle, comprising:
a user detection system configured to output a gesture signal in response to a hand of a user performing at least one gesture to indicate a final target position;
a user gaze monitoring system configured to output an eye location signal that indicates an actual eye position of the user;
one or more processors; and
one or more non-transitory memory modules communicatively coupled to the one or more processors and storing machine-readable instructions that, when executed, cause the one or more processors to perform at least the following:
determine a first point and a second point located on the hand of the user based at least in part on the gesture signal from the user detection system, wherein the first point and the second point define a pointing axis of the hand of the user;
calculate a virtual eye position based at least in part on the first point located on the hand of the user and the actual eye position;
determine a midpoint measured between a pair of virtual eyes;
determine a vector that originates at the midpoint and intersects the first point located on the hand of the user, wherein the vector represents a virtual gaze direction of the user;
calculate a first target position based on the virtual eye position;
calculate a second target position based on the pointing axis of the hand of the user;
determine the final target position based on the first target position and the second target position; and
control at least one vehicle system based at least in part on the final target position.
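The virtual-gaze geometry recited above can be sketched in a few lines. This is an illustrative sketch only, not the patent's implementation: the rule used to place the virtual eyes (translating the actual eyes toward the hand's first point) and the fixed interocular offset are assumptions, since the claim only says the virtual eye position is calculated from the first hand point and the actual eye position.

```python
import numpy as np

def virtual_gaze_direction(actual_eye_pos, first_point):
    """Illustrative sketch of the claimed virtual-gaze geometry.
    The placement rule for the virtual eyes is an assumption."""
    actual_eye_pos = np.asarray(actual_eye_pos, dtype=float)
    first_point = np.asarray(first_point, dtype=float)

    # Assumed rule: place a pair of virtual eyes by translating the
    # actual eye position partway toward the hand's first point,
    # separated by a small assumed interocular offset.
    shift = 0.5 * (first_point - actual_eye_pos)
    virtual_left = actual_eye_pos + shift + np.array([-0.03, 0.0, 0.0])
    virtual_right = actual_eye_pos + shift + np.array([0.03, 0.0, 0.0])

    # Midpoint measured between the pair of virtual eyes.
    midpoint = 0.5 * (virtual_left + virtual_right)

    # Vector originating at the midpoint and intersecting the first
    # point on the hand: the virtual gaze direction.
    direction = first_point - midpoint
    return midpoint, direction / np.linalg.norm(direction)
```

With the actual eyes at the origin and the hand's first point one meter ahead, the midpoint lands halfway along that line and the gaze direction points straight at the hand point.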
Abstract
Vehicle systems and methods for determining a target position are disclosed. A vehicle includes a user detection system configured to output a gesture signal in response to a hand of a user performing at least one gesture to indicate a final target position. The vehicle also includes a user gaze monitoring system configured to output an eye location signal that indicates an actual eye position of the user. The vehicle also includes one or more processors and one or more non-transitory memory modules communicatively coupled to the processors. The memory modules store machine-readable instructions that, when executed, cause the one or more processors to determine a first point and a second point located on the hand of the user based at least in part on the gesture signal from the user detection system. The first point and the second point define a pointing axis of the hand of the user.
9 Claims
1. A vehicle, comprising:
a user detection system configured to output a gesture signal in response to a hand of a user performing at least one gesture to indicate a final target position;
a user gaze monitoring system configured to output an eye location signal that indicates an actual eye position of the user;
one or more processors; and
one or more non-transitory memory modules communicatively coupled to the one or more processors and storing machine-readable instructions that, when executed, cause the one or more processors to perform at least the following:
determine a first point and a second point located on the hand of the user based at least in part on the gesture signal from the user detection system, wherein the first point and the second point define a pointing axis of the hand of the user;
calculate a virtual eye position based at least in part on the first point located on the hand of the user and the actual eye position;
determine a midpoint measured between a pair of virtual eyes;
determine a vector that originates at the midpoint and intersects the first point located on the hand of the user, wherein the vector represents a virtual gaze direction of the user;
calculate a first target position based on the virtual eye position;
calculate a second target position based on the pointing axis of the hand of the user;
determine the final target position based on the first target position and the second target position; and
control at least one vehicle system based at least in part on the final target position.
(Dependent claims: 2, 3)
4. A vehicle, comprising:
a user detection system configured to output a gesture signal in response to a hand of a user performing at least one gesture to indicate a final target position;
a user gaze monitoring system configured to output an eye location signal that indicates an actual eye position of the user;
one or more processors; and
one or more non-transitory memory modules communicatively coupled to the one or more processors and storing machine-readable instructions that, when executed, cause the one or more processors to perform at least the following:
determine a first point and a second point located on the hand of the user based at least in part on the gesture signal from the user detection system, wherein the first point and the second point define a pointing axis of the hand of the user;
determine a vector by extending a line segment representing the pointing axis beyond the hand of the user, wherein the vector represents a pointing direction;
determine a presence of an object located in an environment surrounding the vehicle that intersects with the vector representing the pointing direction;
calculate a virtual eye position based at least in part on the first point located on the hand of the user and the actual eye position;
calculate a first target position based on the virtual eye position;
calculate a second target position based on the pointing axis of the hand of the user;
identify the object that intersects with the vector as the second target position;
determine the final target position based on the first target position and the second target position; and
control at least one vehicle system based at least in part on the final target position.
(Dependent claims: 5, 6)
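Claim 4's pointing-direction steps, extending the pointing axis beyond the hand and testing which surrounding object the resulting ray intersects, can be sketched as follows. The sketch is illustrative: which hand point is treated as the ray origin, and the use of bounding spheres to model surrounding objects, are assumptions not specified by the claim.

```python
import numpy as np

def pointing_ray(first_point, second_point):
    """Extend the line segment through the two hand points (the
    pointing axis) beyond the hand to form a pointing-direction ray.
    Treating the second point as the origin is an assumption."""
    origin = np.asarray(second_point, dtype=float)
    direction = np.asarray(first_point, dtype=float) - origin
    return origin, direction / np.linalg.norm(direction)

def first_object_hit(origin, direction, objects):
    """Return the nearest (center, radius) object whose bounding
    sphere the ray intersects, or None. Spheres are an assumed
    stand-in for however the vehicle models surrounding objects."""
    best, best_t = None, np.inf
    for center, radius in objects:
        oc = np.asarray(center, dtype=float) - origin
        t = np.dot(oc, direction)             # projection of center onto the ray
        if t < 0:
            continue                          # object lies behind the hand
        closest = origin + t * direction      # closest point on the ray to center
        if np.linalg.norm(np.asarray(center) - closest) <= radius and t < best_t:
            best, best_t = (center, radius), t
    return best
```

The object returned by `first_object_hit` would then serve as the second target position in the claim's terms; an object whose closest approach to the ray exceeds its radius is simply skipped.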
7. A method of determining a final target position that a user of a vehicle is gesturing towards, the method comprising:
determining, by a computer, a first point and a second point located on a hand of the user based at least in part on a gesture signal generated by a user detection system, wherein the first point and the second point define a pointing axis of the hand of the user;
calculating a virtual eye position based at least in part on the first point located on the hand of the user and an actual eye position of the user, wherein the actual eye position is indicated by an eye location signal generated by a user gaze monitoring system;
determining a midpoint measured between a pair of virtual eyes;
determining a vector that originates at the midpoint and intersects the first point located on the hand of the user, wherein the vector represents a virtual gaze direction of the user;
calculating, by the computer, a first target position based on the virtual eye position;
calculating, by the computer, a second target position based on the pointing axis of the hand of the user;
determining, by the computer, the final target position based on the first target position and the second target position; and
controlling at least one vehicle system based at least in part on the final target position.
(Dependent claims: 8, 9)
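The final step shared by all three independent claims, determining the final target position from the gaze-based and pointing-based candidates, is left open by the claim language. One simple way to combine the two candidates is a weighted average; the weighting below is purely an assumption for illustration.

```python
import numpy as np

def final_target(first_target, second_target, gaze_weight=0.5):
    """Combine the gaze-based candidate (first target) and the
    pointing-based candidate (second target) into a final target.
    The claims only require that the final target be determined
    from both candidates; this weighted average is an assumption."""
    first_target = np.asarray(first_target, dtype=float)
    second_target = np.asarray(second_target, dtype=float)
    return gaze_weight * first_target + (1.0 - gaze_weight) * second_target
```

With the default equal weighting, the final target is simply the midpoint of the two candidates; a real system might instead weight by each sensor's estimated reliability.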
Specification