USER INTERACTION PARADIGMS FOR A FLYING DIGITAL ASSISTANT
Abstract
Methods and systems are described for new paradigms for user interaction with an unmanned aerial vehicle (referred to as a flying digital assistant or FDA) using a portable multifunction device (PMD) such as a smartphone. In some embodiments, a user may control image capture from an FDA by adjusting the position and orientation of a PMD. In other embodiments, a user may input a touch gesture via a touch display of a PMD that corresponds with a flight path to be autonomously flown by the FDA.
48 Claims
1. An unmanned aerial vehicle (UAV) configured to capture aerial video of multiple subjects in a physical environment, the UAV comprising:
an image capture device;
a wireless network interface configured for wireless communication with a plurality of mobile devices; and
a visual navigation system configured to:
receive, via the wireless network interface, an image capture request from a first mobile device of the plurality of mobile devices;
determine, in response to the image capture request, a position of the first mobile device relative to the UAV based at least in part on images of the physical environment received via the image capture device; and
generate control commands configured to cause the UAV to maneuver such that the image capture device tracks the position of the first mobile device.
(Dependent claims: 2-14)
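Purely as an illustration of the final step recited in claim 1 (maneuvering so the image capture device tracks the determined relative position), the sketch below computes gimbal yaw and pitch angles that point a camera at a target. The function name, coordinate frame (x forward, y left, z up), and units are assumptions for illustration; the claims do not recite any particular math.

```python
import math

def track_target(relative_pos):
    """Given the target's position relative to the UAV (x forward, y left,
    z up, in meters), return (yaw, pitch) gimbal angles in radians that
    keep the camera pointed at the target.

    Hypothetical helper for illustration only; not the patent's method.
    """
    x, y, z = relative_pos
    yaw = math.atan2(y, x)                   # rotate toward target in the horizontal plane
    pitch = math.atan2(z, math.hypot(x, y))  # tilt toward target's elevation
    return yaw, pitch

# A target 10 m ahead and 10 m to the left needs a 45-degree yaw, no tilt.
yaw, pitch = track_target((10.0, 10.0, 0.0))
```

In a real system these angles would feed a control loop that is re-run as new position estimates arrive from the visual navigation system.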
15. A method for capturing aerial video of multiple subjects in a physical environment by an unmanned aerial vehicle (UAV), the UAV including an image capture device, the method comprising:
receiving, by the UAV, a first wireless communication including an image capture request from a first mobile device of a plurality of mobile devices;
determining, by the UAV, a position of the first mobile device relative to the UAV based at least in part on images of the physical environment received via the image capture device; and
autonomously maneuvering, by the UAV, such that the image capture device tracks the position of the first mobile device.
(Dependent claims: 16-18)
19. An unmanned aerial vehicle (UAV) comprising:
a propulsion system;
a power source coupled to the propulsion system;
an image capture device;
a data storage device configured to store images captured via the image capture device;
a processing unit; and
a memory unit having instructions stored thereon, which when executed by the processing unit, cause the UAV to:
generate control commands configured to cause the UAV to maneuver through a physical environment using the propulsion system;
capture, via the image capture device, video of the physical environment while the UAV maneuvers through the physical environment;
calculate a remaining flight time based on a monitored level of the power source;
calculate a remaining video recording time based on a monitored level of available storage space in the data storage device; and
generate control commands configured to cause the UAV to automatically land in response to detecting that either the calculated remaining flight time or the calculated remaining video recording time is below a specified threshold.
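The auto-land condition recited in claims 19, 20, and 25 (land when either remaining flight time or remaining recording time falls below a threshold) can be sketched in a few lines. The function names, the 60-second default threshold, and the bitrate-based storage estimate are illustrative assumptions, not values from the patent.

```python
def should_auto_land(remaining_flight_s, remaining_recording_s, threshold_s=60.0):
    """Return True if either the remaining flight time (derived from the
    monitored power-source level) or the remaining video recording time
    (derived from monitored free storage) is below the threshold."""
    return (remaining_flight_s < threshold_s or
            remaining_recording_s < threshold_s)

def remaining_recording_time(free_bytes, bitrate_bps):
    """Estimate recording time left from free storage and the video bitrate."""
    return free_bytes * 8 / bitrate_bps

# 5 minutes of battery but only 30 s of storage left -> land.
land = should_auto_land(300.0, 30.0)
```

A usage note: because either quantity alone triggers the landing, the UAV lands before video capture is silently cut off by a full storage device, not only on low battery.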
20. An autonomous unmanned aerial vehicle (UAV) comprising:
a propulsion system;
a power source coupled to the propulsion system; and
an autonomous navigation system configured to:
generate control commands configured to cause the UAV to autonomously maneuver through a physical environment using the propulsion system;
monitor a level of the power source; and
generate control commands configured to cause the UAV to automatically land in response to detecting that the level of the power source is below a first threshold.
(Dependent claims: 21-24)
25. A method comprising:
calculating a remaining flight time of an unmanned aerial vehicle (UAV) based on a monitored level of a power source coupled to a propulsion system of the aerial vehicle;
calculating a remaining video recording time based on a monitored level of available storage space in a data storage device coupled to an image capture device of the UAV; and
generating control commands configured to cause the UAV to automatically land in response to detecting that either the calculated remaining flight time or the calculated remaining video recording time is below a specified threshold.
(Dependent claim: 26)
27. A method for capturing aerial video of a physical environment with synchronized distributed audio capture, the method comprising:
receiving, by a wireless network interface of a first mobile device, video of the physical environment captured by a camera of an unmanned aerial vehicle (UAV) in autonomous flight over the physical environment;
capturing, by a first microphone of the first mobile device, audio of the physical environment; and
synchronizing, by a processing unit of the first mobile device:
the video of the physical environment captured by the camera of the UAV; and
the audio of the physical environment captured by the first microphone of the first mobile device.
(Dependent claims: 28, 29)
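Claims 27 through 33 recite synchronizing UAV-captured video with audio captured on one or more mobile devices, without specifying a mechanism. One common approach, assumed here purely for illustration, is to timestamp both streams against a shared clock (e.g. synchronized over the wireless link) and pair each video frame with the nearest audio chunk by timestamp:

```python
from bisect import bisect_left

def align_audio_to_video(video_ts, audio_ts):
    """For each video-frame timestamp, return the index of the audio chunk
    whose timestamp is closest. Both lists must be sorted. Assumes the two
    device clocks are already synchronized; that assumption, and all names
    here, are illustrative rather than taken from the patent."""
    indices = []
    for t in video_ts:
        i = bisect_left(audio_ts, t)
        # step back to the previous chunk if it is at least as close
        if i > 0 and (i == len(audio_ts) or
                      abs(audio_ts[i - 1] - t) <= abs(audio_ts[i] - t)):
            i -= 1
        indices.append(i)
    return indices

pairing = align_audio_to_video([0.0, 1.0, 2.05], [0.0, 1.1, 2.0])
```

Claim 27 places this pairing on the mobile device's processing unit, while claims 30 and 32 place it on the UAV; the alignment logic is the same either way, only the direction of the wireless transfer differs.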
30. A method for capturing aerial video of a physical environment with synchronized distributed audio capture, the method comprising:
capturing, by a camera coupled to an unmanned aerial vehicle (UAV), video of the physical environment;
receiving, via a wireless network interface of the UAV, audio of the physical environment captured by a plurality of microphones of a distributed network of mobile devices; and
synchronizing, by a processing unit of the UAV:
the video of the physical environment captured by the camera of the UAV; and
the audio of the physical environment captured by the plurality of microphones of the distributed network of mobile devices.
(Dependent claim: 31)
32. An unmanned aerial vehicle (UAV) configured to capture aerial video of a physical environment with synchronized distributed audio capture, the UAV comprising:
a camera;
a wireless network interface configured for wireless communication with a plurality of mobile devices;
a processing unit; and
a memory having instructions stored thereon, which when executed by the processing unit, cause the UAV to:
capture, via the camera, video of the physical environment;
receive, via the wireless network interface, audio of the physical environment captured by a plurality of microphones of a distributed network of mobile devices; and
synchronize the video of the physical environment captured by the camera of the UAV and the audio of the physical environment captured by the plurality of microphones of the distributed network of mobile devices.
(Dependent claim: 33)
34. A method comprising:
receiving sensor data gathered by sensors onboard an unmanned aerial vehicle (UAV) in flight through a physical environment;
generating a three dimensional (3D) model of the physical environment based at least in part on the sensor data received from the UAV;
determining a position and orientation of a display device relative to the physical environment;
placing a virtual camera in the 3D model of the physical environment at a position and orientation corresponding to the determined position and orientation of the display device;
generating a visual representation of at least a portion of the 3D model from a field of view of the virtual camera; and
causing display, via the display device, of the generated visual representation.
(Dependent claims: 35-41)
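The "virtual camera" of claim 34 amounts to rendering the 3D model from the display device's determined pose. As a minimal illustration of that rendering step, the pinhole projection below maps a world-space model point into image coordinates for a virtual camera at a given position. The identity orientation, axis convention, and function name are simplifying assumptions; the claim does not recite any specific projection.

```python
def project_point(point_w, cam_pos, focal_px):
    """Project a world-space point into a virtual camera at cam_pos that
    looks down the +x axis (identity orientation for simplicity). Returns
    (u, v) pixel offsets from the image center, or None when the point is
    behind the camera. A minimal pinhole-camera sketch only."""
    dx = point_w[0] - cam_pos[0]   # depth along the view axis
    dy = point_w[1] - cam_pos[1]
    dz = point_w[2] - cam_pos[2]
    if dx <= 0:
        return None                # behind the virtual camera
    u = focal_px * dy / dx
    v = focal_px * dz / dx
    return (u, v)

# A point 2 m ahead and 1 m to the side lands halfway to the image edge
# for a 100 px focal length.
uv = project_point((2.0, 1.0, 0.0), (0.0, 0.0, 0.0), 100.0)
```

Repeating this projection for every visible point (or triangle) of the 3D model, as the display device moves, yields the view-dependent visual representation the claim recites.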
42. A method for implementing augmented reality (AR) based on sensor data collected by an unmanned aerial vehicle (UAV), the method comprising:
receiving sensor data gathered by sensors onboard the UAV in flight through a physical environment;
determining a position and orientation of an AR display device relative to the physical environment;
generating a graphical element based at least in part on the sensor data collected by the UAV and the determined position and orientation of the AR display device; and
causing display, by the AR display device, of the generated graphical element.
(Dependent claims: 43-48)
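For claim 42, one concrete (and purely assumed) instance of "generating a graphical element based ... on the sensor data collected by the UAV and the determined position and orientation of the AR display device" is drawing a marker over the UAV's reported position: transform the UAV's world position into the display's frame, then project it to screen coordinates. All names, the yaw-only orientation model, and the axis conventions below are illustrative assumptions.

```python
import math

def uav_marker_screen_pos(uav_world, display_world, display_yaw, focal_px):
    """Compute where to draw a marker for the UAV on an AR display.
    Transforms the UAV's world position (e.g. from its telemetry) into the
    display's frame (position plus yaw about the vertical axis), then
    applies a pinhole projection. Returns (u, v) pixel offsets from the
    screen center, or None when the UAV is behind the viewer."""
    # world -> display frame: translate, then rotate by -yaw
    tx = uav_world[0] - display_world[0]
    ty = uav_world[1] - display_world[1]
    tz = uav_world[2] - display_world[2]
    c, s = math.cos(-display_yaw), math.sin(-display_yaw)
    fx = c * tx - s * ty    # distance along the display's forward axis
    fy = s * tx + c * ty    # offset along the display's left axis
    if fx <= 0:
        return None          # UAV is behind the viewer
    return (focal_px * fy / fx, focal_px * tz / fx)

# Viewer at the origin facing +x; UAV 4 m ahead, 2 m left, 1 m up.
marker = uav_marker_screen_pos((4.0, 2.0, 1.0), (0.0, 0.0, 0.0), 0.0, 100.0)
```

Other graphical elements the claim could cover (flight-path traces, sensor overlays) would reuse the same world-to-display transform; only what is drawn at the projected coordinates changes.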
Specification