INTELLIGENT AUTO-CROPPING OF IMAGES
Abstract
Techniques for providing an accurate auto-crop feature for images captured by an image capture device may be described herein. For example, one or more image masks for a color image captured by an image capture device may be received by a computer system. Metadata about the color image that identifies portions of the image as foreground and the color image itself may also be received by the computer system. Further, a representation of a user and a floor region associated with a user may be extracted from the color image using the one or more image masks and the metadata. A first area of the color image may be cropped with respect to the extracted representation of the user and the floor region associated with the user to generate a second area of the color image. In embodiments, a third area of the color image may be obscured based on the received metadata.
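The pipeline the abstract describes — receive a foreground mask and metadata, extract the subject, crop the image around the extracted region — can be sketched in a few lines of NumPy. Everything below (the `auto_crop` name, the `margin` parameter, the toy arrays) is illustrative and not drawn from the patent itself:

```python
import numpy as np

def auto_crop(color_image: np.ndarray, foreground_mask: np.ndarray,
              margin: int = 2) -> np.ndarray:
    """Crop a color image to the bounding box of a binary foreground mask.

    color_image: H x W x 3 RGB array.
    foreground_mask: H x W boolean array, True where the subject is.
    margin: extra pixels kept around the bounding box (an assumed knob;
    the abstract does not specify how the crop boundary is padded).
    """
    rows = np.any(foreground_mask, axis=1)
    cols = np.any(foreground_mask, axis=0)
    top, bottom = np.where(rows)[0][[0, -1]]
    left, right = np.where(cols)[0][[0, -1]]
    h, w = foreground_mask.shape
    top = max(top - margin, 0)
    left = max(left - margin, 0)
    bottom = min(bottom + margin, h - 1)
    right = min(right + margin, w - 1)
    return color_image[top:bottom + 1, left:right + 1]

# Tiny example: a 6x6 image whose "subject" is a 2x2 block.
img = np.zeros((6, 6, 3), dtype=np.uint8)
mask = np.zeros((6, 6), dtype=bool)
mask[2:4, 2:4] = True
img[mask] = 255
cropped = auto_crop(img, mask, margin=1)
```

With a one-pixel margin, the 2x2 subject yields a 4x4 crop centered on it.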
20 Claims
1. A computer-implemented method, comprising:

receiving, by a computer system and from an image capture device, a first image mask of an image of a user, the image, and metadata about the image, the metadata indicating a plurality of unique values for one or more portions included in the image, the image capture device configured to capture a depth image of the user and a color mask image of the user, the depth image of the user including a three-dimensional (3D) representation of the user that is used by the image capture device to identify a depth of the user and a foreground location of the user with respect to a background of the image captured by the image capture device, the color mask image comprising a red, green, and blue (RGB) image of the user, the first image mask comprising the depth image and the color mask image;

extracting, by the computer system, a representation of the user and a representation of a floor region associated with the representation of the user from the image based at least in part on the first image mask and the metadata, the representation of the user comprised of a first subset of portions of the one or more portions included in the image and the representation of the floor region comprised of a second subset of portions of the one or more portions included in the image;

removing, by the computer system, a first area of the image with respect to the extracted representation of the user in the image based at least in part on the first image mask and the metadata thereby generating a second area of the image;

combining, by the computer system, the extracted representation of the user with the floor region of the image with respect to the second area of the image based at least in part on the first image mask and the metadata; and

displaying, by the computer system, a revised image of the user via a user interface of the computer system, the revised image of the user comprising the combination of the extracted representation of the user with the floor region of the image contained within the second area of the image. - View Dependent Claims (2, 3, 4, 5, 6)
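Claim 1's metadata "indicating a plurality of unique values for one or more portions" reads naturally as a per-pixel label map, where each unique value tags a portion (user, floor, background). The steps above can then be sketched as follows; the label values and the `revise_image` name are assumptions for illustration, not taken from the claim:

```python
import numpy as np

# Illustrative label values standing in for the metadata's unique values;
# the actual values would be device-specific.
BACKGROUND, USER, FLOOR = 0, 1, 2

def revise_image(image: np.ndarray, label_map: np.ndarray) -> np.ndarray:
    """Toy walk-through of claim 1's steps on an RGB image and label map.

    1. extract the user portions (first subset) and floor portions
       (second subset) named by the metadata;
    2. remove the first area (background), leaving a second area;
    3. combine the user with the floor region inside that second area.
    """
    user = label_map == USER        # first subset of portions
    floor = label_map == FLOOR      # second subset of portions
    keep = user | floor             # the "second area" after removal
    revised = np.zeros_like(image)  # removed first area becomes blank
    revised[keep] = image[keep]     # combined user + floor region
    return revised

labels = np.array([[0, 0, 0],
                   [0, 1, 0],
                   [2, 2, 2]])
rgb = np.full((3, 3, 3), 200, dtype=np.uint8)
out = revise_image(rgb, labels)
```

Only the user pixel and the floor row survive into the revised image; the background pixels are zeroed out.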
7. A computer-implemented method, comprising:
receiving, by a computer system and from an image capture device, a first image mask that comprises a two-dimensional (2D) representation of a user in an image captured by the image capture device and first metadata that identifies a first subset of regions in the image as being in a foreground of the image;

receiving, by the computer system and from the image capture device, a second image mask that comprises a representation of a floor region associated with the user in the image captured by the image capture device and second metadata that identifies a second subset of regions in the image as the foreground of the image;

receiving, by the computer system and from the image capture device, a color image of the user;

extracting, by the computer system, the representation of the user and the floor region associated with the user, from the color image of the user, based at least in part on the first image mask, the second image mask, the first metadata, and the second metadata;

cropping, by the computer system, a first area of the color image of the user with respect to the extracted representation of the user and the floor region associated with the user based at least in part on the first image mask and the second image mask thereby generating a second area of the color image; and

obscuring, by the computer system, a third area of the cropped color image based at least in part on the first metadata and the second metadata thereby generating a revised color image of the user that comprises a combination of the extracted representation of the user and the floor region associated with the user. - View Dependent Claims (8, 9, 10, 11, 12, 13)
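Claim 7 differs from claim 1 in taking two masks (user and floor) and in obscuring, rather than discarding, a third area of the cropped image. A minimal sketch, assuming the obscuring step is a flat gray fill (the claim does not say how the third area is obscured, so that choice is illustrative):

```python
import numpy as np

def crop_and_obscure(color: np.ndarray, user_mask: np.ndarray,
                     floor_mask: np.ndarray) -> np.ndarray:
    """Crop to the bounding box of the combined user + floor masks,
    then obscure the remaining non-subject pixels inside the crop.
    All names here are illustrative, not from the claim."""
    combined = user_mask | floor_mask
    rows = np.where(np.any(combined, axis=1))[0]
    cols = np.where(np.any(combined, axis=0))[0]
    top, bottom, left, right = rows[0], rows[-1], cols[0], cols[-1]
    cropped = color[top:bottom + 1, left:right + 1].copy()
    sub_mask = combined[top:bottom + 1, left:right + 1]
    cropped[~sub_mask] = 128    # the obscured "third area"
    return cropped

color = np.full((5, 5, 3), 250, dtype=np.uint8)
user = np.zeros((5, 5), dtype=bool)
user[1:3, 1:3] = True           # 2x2 user region
floor = np.zeros((5, 5), dtype=bool)
floor[3, 1:4] = True            # floor strip under the user
result = crop_and_obscure(color, user, floor)
```

The crop tightens to the 3x3 union of the two masks; within it, pixels belonging to neither mask are grayed out rather than removed.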
14. A computer system, comprising:
memory that stores computer-executable instructions;

a first sensor configured to capture a three-dimensional (3D) image of an object;

a second sensor configured to capture a color image of the object; and

at least one processor configured to access the memory and execute the computer-executable instructions to collectively at least:

obtain a first image mask that comprises a two-dimensional (2D) representation of a user in an image captured by the first sensor and first metadata that identifies a first subset of regions in the image as being in a foreground of the image based at least in part on a 3D image of the image captured by the first sensor;

obtain a second image mask that comprises a representation of a floor region associated with the user in the image captured by the first sensor and second metadata that identifies a second subset of regions in the image as the foreground of the image;

obtain the color image of the user from the second sensor;

extract the representation of the user and the floor region associated with the user, from the color image of the user, based at least in part on the first image mask, the second image mask, the first metadata, and the second metadata; and

remove a first area of the color image of the user with respect to the extracted representation of the user and the floor region associated with the user based at least in part on the first image mask and the second image mask thereby generating a second area of the color image. - View Dependent Claims (15, 16, 17, 18, 19, 20)
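In the claimed system, the first (depth) sensor's 3D image is what identifies the user's foreground location. One plausible way to derive a 2D foreground mask from such a depth image is a simple distance threshold; the claim does not specify a method, so the thresholding heuristic and `foreground_from_depth` name below are assumptions:

```python
import numpy as np

def foreground_from_depth(depth: np.ndarray, max_depth: float) -> np.ndarray:
    """Derive a 2D foreground mask from a depth image by keeping pixels
    with a valid (nonzero) reading closer than max_depth.
    An illustrative heuristic, not the patent's stated method."""
    return (depth > 0) & (depth < max_depth)

# Depth readings in meters: the user at 1.5 m, floor at 2.0 m,
# background wall at 9.0 m.
depth = np.array([[9.0, 9.0, 9.0],
                  [9.0, 1.5, 9.0],
                  [2.0, 2.0, 2.0]])
mask = foreground_from_depth(depth, max_depth=3.0)
```

The resulting boolean mask marks the user and floor pixels as foreground, which is the shape of input the extract and remove steps above consume.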
Specification