AUTONOMOUS GARDENING VEHICLE WITH CAMERA
First Claim
1. A method for generating scaled terrain information with an unmanned autonomous gardening vehicle, the gardening vehicle comprising:
- a driving unit comprising a set of at least one drive wheel, and a motor connected to the at least one drive wheel for providing movability of the gardening vehicle;
- a gardening-tool; and
- a camera for capturing images of a terrain, the camera being positioned and aligned in known manner relative to the gardening vehicle;
wherein the method comprises:
moving the gardening vehicle in the terrain whilst concurrently generating a set of image data by capturing an image series of terrain sections so that at least two images of the image series cover an amount of identical points in the terrain, wherein the terrain sections are defined by a viewing area of the camera at respective positions of the camera while moving;
applying a simultaneous localisation and mapping (SLAM) algorithm to the set of image data and thereby deriving terrain data, the terrain data comprising:
- a point cloud representing the captured terrain, and
- position data relating to a relative position of the gardening vehicle in the terrain; and
scaling the point cloud by applying absolute scale information to the terrain data.
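For illustration, the final scaling step can be sketched in a few lines: a monocular SLAM reconstruction is defined only up to scale, so applying one absolute scale factor to every point of the cloud (and to the vehicle's relative position) yields metrically scaled terrain data. This is a minimal sketch; the function and variable names and the data layout are assumptions for the example, not part of the claim.

```python
# Illustrative sketch only: scale a relative SLAM point cloud and the
# vehicle's relative position to metric units with one absolute scale factor.

def scale_terrain_data(point_cloud, vehicle_position, scale_factor):
    """Scale terrain data to metric units.

    point_cloud      -- list of (x, y, z) tuples in arbitrary SLAM units
    vehicle_position -- (x, y, z) tuple in the same units
    scale_factor     -- metres per SLAM unit (the absolute scale information)
    """
    scaled_cloud = [(x * scale_factor, y * scale_factor, z * scale_factor)
                    for (x, y, z) in point_cloud]
    scaled_position = tuple(c * scale_factor for c in vehicle_position)
    return scaled_cloud, scaled_position

# Example: a two-point cloud in SLAM units, with 0.5 metres per SLAM unit.
cloud, pos = scale_terrain_data([(2.0, 0.0, 4.0), (0.0, 2.0, 0.0)],
                                (1.0, 1.0, 0.0), 0.5)
```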
Abstract
Some embodiments described herein include a method for generating scaled terrain information with an unmanned autonomous gardening vehicle. In some embodiments the gardening vehicle includes a driving unit comprising a set of at least one drive wheel and a motor connected to the at least one drive wheel for providing movability of the gardening vehicle, a gardening-tool, and a camera for capturing images of a terrain, the camera being positioned and aligned in known manner relative to the gardening vehicle. In the context of the method, the gardening vehicle is moved in the terrain while concurrently generating a set of image data by capturing an image series of terrain sections so that at least two (successive) images of the image series cover an amount of identical points in the terrain, wherein the terrain sections are defined by a viewing area of the camera at respective positions of the camera while moving.
20 Claims
1. A method for generating scaled terrain information with an unmanned autonomous gardening vehicle, the gardening vehicle comprising:
- a driving unit comprising a set of at least one drive wheel, and a motor connected to the at least one drive wheel for providing movability of the gardening vehicle;
- a gardening-tool; and
- a camera for capturing images of a terrain, the camera being positioned and aligned in known manner relative to the gardening vehicle;
wherein the method comprises:
moving the gardening vehicle in the terrain whilst concurrently generating a set of image data by capturing an image series of terrain sections so that at least two images of the image series cover an amount of identical points in the terrain, wherein the terrain sections are defined by a viewing area of the camera at respective positions of the camera while moving;
applying a simultaneous localisation and mapping (SLAM) algorithm to the set of image data and thereby deriving terrain data, the terrain data comprising:
- a point cloud representing the captured terrain, and
- position data relating to a relative position of the gardening vehicle in the terrain; and
scaling the point cloud by applying absolute scale information to the terrain data.
(Dependent claims: 2-16)
- a distance measuring unit for measuring distances to an object by emitting laser light and receiving the laser light reflected at the object;
- a range camera; and
- an inertial measuring unit (IMU).
6. The method according to claim 5, wherein the wheel comprises a drive wheel.
7. The method according to claim 1, wherein the absolute scale information is derived by capturing a reference image of a reference body of known appearance and/or position in the terrain and deriving the absolute scale information by image processing based on the appearance of the reference body in the captured reference image and on a known magnification ratio of the camera, wherein at least one of the dimensions, spatial orientation, and shape of the reference body is pre-known.
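One way this derivation can work, under a pinhole-camera assumption, is that a reference body of known width W appearing with pixel width w in an image taken with focal length f (in pixels) lies at metric distance Z = f * W / w; tying such a metric distance to the corresponding points of the otherwise scale-free reconstruction fixes the absolute scale. The following is a minimal sketch of that relation only; all names and values are assumptions made for the example.

```python
# Illustrative sketch only: metric distance to a reference body of known
# width from its apparent width in pixels (pinhole-camera model).

def distance_to_reference(known_width_m, pixel_width, focal_length_px):
    """Return Z = f * W / w, the metric distance to the reference body."""
    return focal_length_px * known_width_m / pixel_width

# Example: a 0.30 m wide reference marker imaged 60 px wide, f = 800 px.
z = distance_to_reference(0.30, 60.0, 800.0)
```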
8. The method according to claim 1, further comprising:
transmitting the set of image data and/or the terrain data to a database and storing the data therein, wherein at least one of:
- the gardening vehicle comprises a storing unit comprising the database,
- a remote controlling unit for controlling the gardening vehicle comprises the storing unit comprising the database, and
- the database is embodied by a data cloud stored on a remote server, wherein the terrain data is derived from the image data by cloud computing based on the simultaneous localisation and mapping (SLAM) algorithm and/or the scaling of the point cloud is performed by cloud computing.
9. The method according to claim 1, wherein a borderline of a working area for the gardening vehicle, within which the gardening vehicle is controllable to autonomously move and work, is defined by at least one of:
- teaching the borderline by moving the gardening vehicle along a desired path, capturing a series of border-images of terrain corresponding to the path and of a defined vicinity relative to the path, wherein the series of border-images represents the image series, and providing the series of border-images for controlling the gardening vehicle; and
- setting the borderline on the basis of a terrain map, wherein border-position data is provided for controlling the gardening vehicle.
10. The method according to claim 9, wherein, in case the borderline is defined by teaching, the method comprises:
- continuously comparing the border-images of the series of border-images with actual images by image processing, the actual images being captured while moving the gardening vehicle inside the working area;
- deriving a rate of matching for every actual image based on the comparison; and
- controlling the movement of the gardening vehicle based on the rate of matching so that the gardening vehicle automatically moves only inside the working area, providing a movement-controlling command for adapting the movement direction of the gardening vehicle if the rate of matching exceeds a predetermined matching-threshold.
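The compare-match-control loop of this claim can be sketched as follows. The claim leaves the image processing open, so this minimal sketch reduces each image to a set of feature identifiers and the rate of matching to a set overlap; all names, the feature representation, and the threshold value are assumptions made for the example.

```python
# Illustrative sketch only: border teaching, reduced to feature-set overlap.

def rate_of_matching(actual_features, border_features):
    """Fraction of a taught border image's features seen in the actual image."""
    if not border_features:
        return 0.0
    return len(actual_features & border_features) / len(border_features)

def movement_command(actual_features, border_images, threshold=0.5):
    """Return 'turn' when any taught border image matches above the threshold,
    i.e. the vehicle appears to be near the borderline; else 'continue'."""
    for border_features in border_images:
        if rate_of_matching(actual_features, border_features) > threshold:
            return "turn"      # adapt movement direction near the border
    return "continue"          # stay on course inside the working area
```

The threshold trades off how early the vehicle reacts against false turns triggered by scenery that merely resembles the taught border.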
11. The method according to claim 1, further comprising:
- extracting at least one state parameter from the set of image data and/or from the terrain data, which represents an actual state of at least one designated terrain section, the state relating to a state of at least one of a plant and the ground;
- comparing the state parameter to a predetermined threshold for the respective state; and
- deriving gardener information based on the comparison of the predetermined threshold and the state parameter.
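The extract-compare-derive steps of claim 11 can be sketched as a simple threshold check. This is a minimal sketch: the parameter names and threshold values are invented for the example, and the claim only requires that a state parameter be compared to a predetermined threshold.

```python
# Illustrative sketch only: derive gardener information by comparing state
# parameters (extracted from image/terrain data) to predetermined thresholds.

THRESHOLDS = {            # hypothetical per-parameter thresholds
    "plant_height_m": 0.08,
    "terrain_humidity": 0.30,
}

def gardener_information(state_parameters):
    """Return one message per state parameter that exceeds its threshold."""
    info = []
    for name, value in state_parameters.items():
        limit = THRESHOLDS.get(name)
        if limit is not None and value > limit:
            info.append(f"{name} exceeds threshold ({value} > {limit})")
    return info
```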
12. The method according to claim 11, wherein the state parameter provides at least one terrain factor of a group of terrain factors, the group of terrain factors comprising at least one of: plant height, plant growth, humidity of the terrain, density of plants, planarity of the terrain, and brightness or colour of the terrain.
13. The method according to claim 11, wherein at least one of:
- the gardener information is provided to a user of the gardening vehicle together with a related recommendation concerning a suggested treatment of the respective at least one designated terrain section, and
- the gardening-tool is applied based on the gardener information.
14. The method according to claim 13, wherein the gardening tool comprises at least one of a hedge-cutter, a tree-branch cutter, a grass-cutter, scissors, a fertilising unit, a pesticide unit, a watering unit, and a lawn thatcher.
15. The method according to claim 1, further comprising:
applying the gardening-tool based on the terrain data, in particular based on the point cloud, wherein:
- the terrain data represents at least an actual shape of an object in the captured terrain;
- gardening data is provided representing a target shape for the object; and
- the gardening-tool is guided based on the gardening data so that the actual shape of the object is transferred into the target shape, wherein the gardening vehicle is positioned at a designated position in the terrain based on the terrain data and the gardening-tool is guided according to a designated shape of a plant based on the terrain data and the gardening data.
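The actual-shape-to-target-shape comparison in claim 15 can be sketched in a deliberately reduced form: a hedge's target shape collapsed to a single target height, so that the points of the actual shape lying above it are the ones the tool must remove. This is a stand-in for the general shape comparison; all names and the flat-top simplification are assumptions made for the example.

```python
# Illustrative sketch only: find point-cloud points of an object's actual
# shape that lie above a (flat) target shape, i.e. the material to cut.

def cutting_points(object_points, target_height):
    """Return the (x, y, z) points whose height z exceeds the target height."""
    return [p for p in object_points if p[2] > target_height]

# Example: trim a hedge to 1.2 m -- only points above that height are cut.
to_cut = cutting_points([(0.0, 0.0, 1.5), (0.1, 0.0, 1.0)], 1.2)
```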
16. The method according to claim 1, further comprising at least one of:
- providing controlling information for controlling a further gardening unit in the terrain, wherein the controlling information is derived from the terrain data and an actual position and orientation of the further gardening unit in the terrain, wherein the position and orientation of the gardening unit is derived from the terrain data by data processing or by image processing of an image covering at least a part of the gardening unit and a part of the terrain, in particular wherein the gardening unit comprises a 6DoF-target enabling the determination of the orientation of the gardening unit with six degrees of freedom by image processing; and
- creating a digital terrain map based on the point cloud.
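The final step of claim 16, creating a digital terrain map from the point cloud, can be sketched as binning points into a regular grid where each cell keeps the highest observed height. This is a minimal sketch; the grid representation, cell size, and names are choices made for the example, not prescribed by the claim.

```python
# Illustrative sketch only: rasterise a scaled (x, y, z) point cloud into a
# digital terrain map, keyed by grid cell and storing the maximum height.

def terrain_map(point_cloud, cell_size):
    """Bin points into cells of size cell_size x cell_size, keeping max z."""
    grid = {}
    for x, y, z in point_cloud:
        cell = (int(x // cell_size), int(y // cell_size))
        grid[cell] = max(z, grid.get(cell, z))
    return grid

# Example: two points fall into the same 1 m cell; the higher one is kept.
dtm = terrain_map([(0.2, 0.3, 0.5), (0.8, 0.4, 0.9), (1.5, 0.1, 0.2)], 1.0)
```

Keeping the maximum per cell is one choice among several (mean or median heights would smooth noise instead); for obstacle-aware navigation the maximum is the conservative option.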
17. An unmanned autonomous gardening vehicle comprising:
- a driving unit comprising a set of at least one drive wheel, and a motor connected to the at least one drive wheel for providing movability of the gardening vehicle;
- a gardening-tool;
- a camera for capturing images of a terrain, the camera being positioned and aligned in known manner relative to the gardening vehicle; and
- a controlling unit for controlling the gardening vehicle,
wherein the gardening vehicle provides a functionality adapted to generate scaled terrain information by performing the following:
moving the gardening vehicle in the terrain whilst concurrently generating a set of image data by capturing an image series of terrain sections so that at least two images of the image series cover an amount of identical points in the terrain, wherein the terrain sections are defined by a viewing area of the camera at respective positions of the camera while moving;
applying a simultaneous localisation and mapping (SLAM) algorithm to the set of image data and thereby deriving terrain data, the terrain data comprising a point cloud representing the captured terrain and position data relating to a relative position of the gardening vehicle in the terrain; and
scaling the point cloud by applying absolute scale information to the terrain data.
(Dependent claims: 18, 19)
20. A non-transitory computer program product having computer-executable instructions for controlling and executing the method comprising:
moving the gardening vehicle in the terrain whilst concurrently generating a set of image data by capturing an image series of terrain sections so that at least two images of the image series cover an amount of identical points in the terrain, wherein the terrain sections are defined by a viewing area of the camera at respective positions of the camera while moving;
applying a simultaneous localisation and mapping (SLAM) algorithm to the set of image data and thereby deriving terrain data, the terrain data comprising:
- a point cloud representing the captured terrain, and
- position data relating to a relative position of the gardening vehicle in the terrain; and
scaling the point cloud by applying absolute scale information to the terrain data.
Specification