3D body modeling, from a single or multiple 3D cameras, in the presence of motion
First Claim
1. A method performed by a computer system comprising processor electronics and at least one memory device, the method comprising:
capturing three dimensional (3D) point clouds using a 3D camera, each of the 3D point clouds corresponding to a different relative position of the 3D camera with respect to a body;
setting one of the 3D point clouds as a reference point cloud;
determining transforms for coordinates of the captured 3D point clouds, other than the reference point cloud, to transform to coordinates of the reference point cloud;
segmenting the body represented in the reference point cloud into body parts corresponding to elements of a 3D part-based volumetric model comprising cylindrical representations; and
generating a segmented representation of the physical object of interest in accordance with the 3D part-based volumetric model, wherein the generating comprises, for each of the captured 3D point clouds other than the reference point cloud:
transforming the captured 3D point cloud using its transform,
segmenting the body represented in the transformed 3D point cloud using the body parts corresponding to the elements of the 3D part-based volumetric model, and
determining local motion, for each of the body parts corresponding to the elements of the 3D part-based volumetric model, between the transformed 3D point cloud and the reference point cloud;
wherein a junction between at least two of the cylindrical representations, which are processed as unwrapped cylindrical maps, is handled by setting a first of the cylindrical maps as a reference map, transforming points of a second of the cylindrical maps to the reference map, and blending the transformed points of the second of the cylindrical maps with points of the first of the cylindrical maps to smooth the junction.
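The "determining transforms" step of the claim is, in effect, rigid registration of each captured cloud against the reference cloud. Below is a minimal sketch of that step, assuming point correspondences between the clouds are already known (a real system would also have to estimate them, for example with ICP); the NumPy implementation and the function name are illustrative, not taken from the patent.

```python
# Closed-form rigid transform (Kabsch/Procrustes) from one cloud to the
# reference cloud, given corresponding points. Illustrative sketch only.
import numpy as np

def rigid_transform_to_reference(cloud: np.ndarray, reference: np.ndarray):
    """Return R, t such that cloud @ R.T + t maps `cloud` onto `reference`.

    cloud, reference: (N, 3) arrays of corresponding 3D points.
    """
    src_centroid = cloud.mean(axis=0)
    ref_centroid = reference.mean(axis=0)
    src = cloud - src_centroid
    ref = reference - ref_centroid

    # Cross-covariance matrix; its SVD yields the least-squares rotation.
    H = src.T @ ref
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = ref_centroid - R @ src_centroid
    return R, t

# Usage: bring a captured cloud into the reference coordinates.
# aligned = cloud @ R.T + t
```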
Abstract
The present disclosure describes systems and techniques relating to generating three dimensional (3D) models from range sensor data. According to an aspect, 3D point clouds are captured using a 3D camera, where each of the 3D point clouds corresponds to a different relative position of the 3D camera with respect to a body. One of the 3D point clouds can be set as a reference point cloud, and transforms can be determined for coordinates of the other captured 3D point clouds to transform these to coordinates of the reference point cloud. The body represented in the reference point cloud can be segmented into body parts corresponding to elements of a 3D part-based volumetric model including cylindrical representations, and a segmented representation of the physical object of interest can be generated in accordance with the 3D part-based volumetric model, while taking localized articulated motion into account.
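As a rough illustration of the part-based segmentation the abstract describes, the sketch below assigns each point of a body cloud to the part whose cylinder axis is nearest. The part names, axis endpoints, and the nearest-axis rule are all illustrative assumptions rather than the patent's actual segmentation procedure.

```python
# Label each point of a body cloud with the nearest cylindrical part element.
import numpy as np

def point_to_segment_distance(points, a, b):
    """Distance from each point to the axis segment a-b. points: (N, 3)."""
    ab = b - a
    t = np.clip((points - a) @ ab / (ab @ ab), 0.0, 1.0)
    closest = a + t[:, None] * ab
    return np.linalg.norm(points - closest, axis=1)

def segment_by_nearest_axis(points, part_axes):
    """Label each point with the part whose cylinder axis is closest.

    part_axes: dict mapping part name -> (a, b) axis endpoints.
    Returns an array of part-name labels, one per point.
    """
    names = list(part_axes)
    dists = np.stack([point_to_segment_distance(points, a, b)
                      for a, b in part_axes.values()], axis=1)
    return np.array(names)[np.argmin(dists, axis=1)]

# Illustrative axes for a torso and one upper arm (hypothetical values):
parts = {
    "torso":     (np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.6, 0.0])),
    "upper_arm": (np.array([0.2, 0.55, 0.0]), np.array([0.5, 0.55, 0.0])),
}
# labels = segment_by_nearest_axis(cloud_points, parts)
```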
16 Claims
1. The method recited above as the First Claim. (Dependent claims 2, 3, 4, and 5 not shown.)
6. A non-transitory computer-readable medium encoding a program that causes data processing apparatus to perform operations to perform local alignment of limbs identified in three dimensional (3D) point clouds captured using a 3D camera, each of the 3D point clouds corresponding to a different relative position of the 3D camera with respect to a body having the limbs, to model the body in the data processing apparatus;
wherein the operations comprise:
capturing the 3D point clouds using the 3D camera;
setting one of the 3D point clouds as a reference point cloud;
determining transforms for coordinates of the captured 3D point clouds, other than the reference point cloud, to transform to coordinates of the reference point cloud;
segmenting the body represented in the reference point cloud into body parts, including the limbs, corresponding to elements of a 3D part-based volumetric model comprising cylindrical representations; and
generating a segmented representation of the physical object of interest in accordance with the 3D part-based volumetric model, wherein the generating comprises, for each of the captured 3D point clouds other than the reference point cloud:
transforming the captured 3D point cloud using its transform,
segmenting the body represented in the transformed 3D point cloud using the body parts corresponding to the elements of the 3D part-based volumetric model, and
determining local motion, for each of the body parts corresponding to the elements of the 3D part-based volumetric model, between the transformed 3D point cloud and the reference point cloud; and
wherein a junction between at least two of the cylindrical representations, which are processed as unwrapped cylindrical maps, is handled by:
setting a first of the cylindrical maps as a reference map,
transforming points of a second of the cylindrical maps to the reference map, and
blending the transformed points of the second of the cylindrical maps with points of the first of the cylindrical maps to smooth the junction.
(Dependent claims 7, 8, 9, and 10 not shown.)
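Claim 6 emphasizes local alignment of the limbs after the global transform has been applied. The following sketch estimates such per-part residual motion with a brute-force nearest-neighbor ICP loop; this loop is an assumed stand-in for the patent's local-motion estimation, and `kabsch` and `local_motion` are hypothetical names.

```python
# Per-part local motion: refine the alignment of one segmented body part
# (e.g., a limb) against the reference cloud after global registration.
import numpy as np

def kabsch(src, dst):
    """Least-squares rotation/translation taking src points onto dst points."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - sc).T @ (dst - dc))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    return R, dc - R @ sc

def local_motion(part_points, ref_points, iters=10):
    """Estimate residual rigid motion of one body part via a small ICP loop."""
    R_total, t_total = np.eye(3), np.zeros(3)
    moved = part_points.copy()
    for _ in range(iters):
        # Brute-force nearest neighbors in the reference cloud.
        d = np.linalg.norm(moved[:, None, :] - ref_points[None, :, :], axis=2)
        matched = ref_points[np.argmin(d, axis=1)]
        R, t = kabsch(moved, matched)
        moved = moved @ R.T + t
        # Compose the incremental motion into the running total.
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```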
11. A system comprising:
processor electronics; and
computer-readable media configured and arranged to cause the processor electronics to perform operations comprising:
capturing three dimensional (3D) point clouds using a 3D camera, each of the 3D point clouds corresponding to a different relative position of the 3D camera with respect to a body;
setting one of the 3D point clouds as a reference point cloud;
determining transforms for coordinates of the captured 3D point clouds other than the reference point cloud to transform to coordinates of the reference point cloud;
segmenting the body represented in the reference point cloud into body parts corresponding to elements of a 3D part-based volumetric model comprising cylindrical representations; and
generating a segmented representation of the physical object of interest in accordance with the 3D part-based volumetric model, wherein the generating comprises, for each of the captured 3D point clouds other than the reference point cloud:
transforming the captured 3D point cloud using its transform,
segmenting the body represented in the transformed 3D point cloud using the body parts corresponding to the elements of the 3D part-based volumetric model, and
determining local motion, for each of the body parts corresponding to the elements of the 3D part-based volumetric model, between the transformed 3D point cloud and the reference point cloud; and
wherein a junction between at least two of the cylindrical representations, which are processed as unwrapped cylindrical maps, is handled by:
setting a first of the cylindrical maps as a reference map,
transforming points of a second of the cylindrical maps to the reference map, and
blending the transformed points of the second of the cylindrical maps with points of the first of the cylindrical maps to smooth the junction.
(Dependent claims 12, 13, 14, 15, and 16 not shown.)
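The wherein clause shared by the independent claims processes the cylindrical representations as unwrapped maps and blends them where parts meet. The sketch below works under stated simplifications: each part's points are already in a local frame with the cylinder axis along z, both maps are assumed to share one (height, angle) parameterization (so the claim's step of transforming the second map into the reference map is not shown), and the grid sizes, overlap width, and linear blend weights are illustrative choices.

```python
# Unwrap a part's points into a (height, angle) radius map, then blend two
# such maps across an overlapping seam to smooth the junction between parts.
import numpy as np

def unwrap_to_cylinder_map(points, n_h=64, n_theta=128):
    """Points (N, 3) in a part's local frame (axis = z) -> (n_h, n_theta) radius map."""
    theta = np.arctan2(points[:, 1], points[:, 0])   # angle around the axis
    h = points[:, 2]                                 # height along the axis
    r = np.hypot(points[:, 0], points[:, 1])         # distance from the axis
    ti = ((theta + np.pi) / (2 * np.pi) * (n_theta - 1)).astype(int)
    hi = ((h - h.min()) / max(np.ptp(h), 1e-9) * (n_h - 1)).astype(int)
    grid = np.zeros((n_h, n_theta))
    grid[hi, ti] = r          # last point per cell wins; a mean would also work
    return grid

def blend_junction(ref_map, other_map, overlap=8):
    """Blend the last `overlap` rows of ref_map with the first rows of other_map."""
    w = np.linspace(1.0, 0.0, overlap)[:, None]      # weight 1 -> 0 across the seam
    seam = w * ref_map[-overlap:] + (1 - w) * other_map[:overlap]
    return np.vstack([ref_map[:-overlap], seam, other_map[overlap:]])
```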
Specification