Autonomous store tracking system
First Claim
1. An autonomous store tracking system, comprising a processor configured to
obtain a 3D model of a store that contains items and item storage areas;
receive a time sequence of images from each camera of a plurality of cameras in said store, wherein said time sequence of images from each camera is captured over a time period;
analyze said time sequence of images and said 3D model of said store to determine a sequence of locations of a person in said store during said time period; and
calculate a field of influence volume around each location of said sequence of locations;
when said field of influence volume intersects an item storage area of said item storage areas during an interaction time period within said time period, receive a first image from a camera in said store oriented to view said item storage area, wherein said first image is captured before or at the beginning of said interaction time period;
receive a second image from said camera in said store oriented to view said item storage area, wherein said second image is captured after or at the end of said interaction time period;
set an input of a neural network to said first image and said second image, wherein said neural network outputs a probability that each item of said items is moved during said interaction time period, and a probability that each action of a set of actions is performed during said interaction time period;
select an item from said items with a highest probability of being moved during said time period in an output of said neural network;
select an action from said set of actions with a highest probability of being performed during said time period in said output of said neural network, and, attribute said action and said item to said person.
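The geometric trigger and selection steps of this claim can be sketched in code. The sketch below is an illustrative assumption, not the patent's implementation: the field of influence volume is modeled as a simple sphere around the person's location, item storage areas as axis-aligned boxes (the `Box` type is hypothetical), and the neural network's two outputs as probability dictionaries.

```python
# Sketch (assumed geometry, not the patented implementation): a spherical
# influence volume tested against an axis-aligned shelf box, and argmax
# selection over hypothetical neural-network output probabilities.
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned item storage area (e.g., a shelf) in store coordinates."""
    lo: tuple  # (x, y, z) minimum corner, metres
    hi: tuple  # (x, y, z) maximum corner, metres

def sphere_intersects_box(center, radius, box):
    """True if a spherical field of influence volume touches the box."""
    d2 = 0.0
    for c, lo, hi in zip(center, box.lo, box.hi):
        nearest = min(max(c, lo), hi)   # closest point on the box to the center
        d2 += (c - nearest) ** 2
    return d2 <= radius ** 2

def attribute_interaction(item_probs, action_probs):
    """Select the most probable moved item and performed action."""
    item = max(item_probs, key=item_probs.get)
    action = max(action_probs, key=action_probs.get)
    return item, action

# Usage: a shopper 0.3 m from a shelf face, with a 0.5 m influence radius.
shelf = Box(lo=(0.0, 0.0, 0.0), hi=(1.0, 0.4, 1.8))
triggered = sphere_intersects_box((1.3, 0.2, 1.0), 0.5, shelf)  # True
item, action = attribute_interaction(
    {"cereal": 0.7, "milk": 0.3}, {"take": 0.8, "put_back": 0.2})
# item == "cereal", action == "take"
```

In this sketch the intersection test would gate the retrieval of the before/after shelf images; only then would the classifier be run and its argmax results attributed to the tracked person.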
Abstract
A system that analyzes camera images to track a person in an autonomous store, and to determine when a tracked person takes or moves items in the store. The system may associate a field of influence volume around a person's location; intersection of this volume with an item storage area, such as a shelf, may trigger the system to look for changes in the items on the shelf. Items that are taken from, placed on, or moved on a shelf may be determined by a neural network that processes before and after images of the shelf. Person tracking may be performed by analyzing images from fisheye ceiling cameras projected onto a plane horizontal to the floor. Projected ceiling camera images may be analyzed using a neural network trained to recognize shopper locations. The autonomous store may include modular ceiling and shelving fixtures that contain cameras, lights, processors, and networking.
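The abstract's projection of ceiling-camera images "onto a plane horizontal to the floor" reduces, per pixel, to intersecting a camera ray with a horizontal plane. A minimal sketch under stated assumptions: an idealized downward-looking pinhole camera is used here (a real fisheye image would first be undistorted), and the intrinsic matrix `K`, camera height, and pixel values are made-up illustration numbers.

```python
# Sketch (pinhole assumption; not the patent's fisheye pipeline): intersect
# the ray through an overhead-camera pixel with a plane parallel to the floor.
import numpy as np

def project_to_plane(pixel, K, cam_height, plane_height=0.0):
    """Map a pixel of a straight-down camera to (x, y) on a horizontal plane.

    K: 3x3 intrinsic matrix; cam_height/plane_height in metres.
    Returns coordinates on the plane relative to the camera's optical axis.
    """
    u, v = pixel
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray direction, camera frame
    t = (cam_height - plane_height) / ray[2]        # scale so the ray hits the plane
    return ray[0] * t, ray[1] * t

# Illustrative intrinsics: focal length 800 px, principal point (640, 360).
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0,   0.0,   1.0]])
x, y = project_to_plane((640, 360), K, cam_height=3.0)
# The principal point maps to the spot directly below the camera, (0, 0).
```

Running this projection for every pixel of every ceiling camera yields the floor-plane views that, per the abstract, a neural network can then scan for shopper locations.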
20 Claims
1. An autonomous store tracking system, comprising a processor configured as set forth above under First Claim.
- View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19)
20. An autonomous store tracking system, comprising:
a modular ceiling in a store comprising
a longitudinal rail mounted to a ceiling of said store;
one or more transverse rails, wherein each transverse rail of said one or more transverse rails is mounted to said longitudinal rail;
one or more integrated lighting-camera modules mounted to said each transverse rail, wherein each integrated lighting-camera module of said one or more integrated lighting-camera modules comprises a lighting element surrounding a center area; and two or more ceiling-mounted cameras of a plurality of ceiling-mounted cameras of said store mounted in said center area;
wherein a position of said each transverse rail along said longitudinal rail is adjustable; a position of said each integrated lighting-camera module along said each transverse rail is adjustable; said center area comprises a camera module comprising at least one slot into which said two or more ceiling-mounted cameras are attached; and a position of each ceiling-mounted camera of said two or more ceiling-mounted cameras in said at least one slot is adjustable;
one or more modular shelves in said store, each modular shelf of said one or more modular shelves comprising
at least one camera module mounted on a bottom side of said each modular shelf, wherein each camera module of said at least one camera module comprises two or more downward-facing cameras;
at least one lighting module, wherein each lighting module of said at least one lighting module comprises a downward-facing light;
a right-facing camera mounted on or proximal to a left edge of said each modular shelf;
a left-facing camera mounted on or proximal to a right edge of said each modular shelf;
a processor; and a network switch;
wherein said each modular shelf is an item storage area for one or more items in said store; said each modular shelf comprises a front rail and a back rail onto which said each camera module and said each lighting module are attached; a position of said each camera module along said front rail and said back rail is adjustable; a position of said each lighting module along said front rail and said back rail is adjustable; said each camera module comprises at least one slot into which said two or more downward-facing cameras are attached; and a position of each downward-facing camera of said two or more downward-facing cameras in said at least one slot is adjustable;
a processor configured to
obtain a 3D model of said store;
receive a time sequence of images from each camera of said plurality of ceiling-mounted cameras, wherein said time sequence of images from each camera is captured over a time period;
project said time sequence of images from each ceiling camera onto a plane parallel to a floor of said store, to form a time sequence of projected images corresponding to each ceiling camera;
analyze said time sequence of projected images corresponding to each ceiling camera, and said 3D model of said store to determine a sequence of locations of a person in said store during said time period; and
calculate a field of influence volume around each location of said sequence of locations;
when said field of influence volume intersects an item storage area of said item storage areas during an interaction time period within said time period, receive a first image from a camera in said store oriented to view said item storage area, wherein said first image is captured before or at the beginning of said interaction time period;
receive a second image from said camera in said store oriented to view said item storage area, wherein said second image is captured after or at the end of said interaction time period;
set an input of a neural network to said first image and said second image, wherein said neural network outputs a probability that each item of said items is moved during said interaction time period, and a probability that each action of a set of actions is performed during said interaction time period;
select an item from said items with a highest probability of being moved during said time period in an output of said neural network;
select an action from said set of actions with a highest probability of being performed during said time period in said output of said neural network, and, attribute said action and said item to said person.
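Both independent claims recite a neural network that takes the before/after shelf images as input and emits two probability distributions: one over items moved and one over actions performed. The toy stand-in below only illustrates that two-headed interface shape; the random projections and softmax heads are assumptions for the sketch, not the trained network of the patent.

```python
# Interface sketch of the dual-output classifier (toy stand-in, not the
# patented network): before/after images in, two probability vectors out.
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    """Normalize raw scores into a probability distribution."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def classify_interaction(before, after, n_items, n_actions):
    """Return (item_probs, action_probs) for a before/after shelf image pair."""
    x = np.concatenate([before.ravel(), after.ravel()])  # stacked image pair
    W_item = rng.normal(size=(n_items, x.size))          # stand-in item head
    W_action = rng.normal(size=(n_actions, x.size))      # stand-in action head
    return softmax(W_item @ x), softmax(W_action @ x)

# Usage with tiny synthetic "shelf images".
before = rng.random((8, 8))
after = rng.random((8, 8))
item_p, action_p = classify_interaction(before, after, n_items=5, n_actions=3)
best_item, best_action = int(item_p.argmax()), int(action_p.argmax())
# Each head sums to 1; the claim's final steps take the argmax of each.
```

The claim's selection steps then reduce to the two `argmax` calls, with the winning item and action attributed to the tracked person.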
Specification