Robotic texture
Abstract
Techniques are disclosed for controlling robot pixels to display a visual representation of a real-world video texture. Mobile robots with controllable color may generate visual representations of the real-world video texture to create an effect like fire, sunlight on water, leaves fluttering in sunlight, a wheat field swaying in the wind, crowd flow in a busy city, and clouds in the sky. For a given allocation of robot pixels, the robot pixels collectively function as a display device. Techniques are also disclosed for distributed collision avoidance among multiple non-holonomic and holonomic robots to guarantee smooth and collision-free motions.
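The abstract describes turning each frame of a video texture into per-robot colors and positions. A minimal sketch of that idea in Python, assuming a frame is a 2D grid of RGB tuples; the function name and the grid-averaging strategy are hypothetical illustrations, not the patented implementation:

```python
from typing import List, Tuple

Color = Tuple[int, int, int]
RobotPixel = Tuple[Tuple[float, float], Color]

def frame_to_robot_pixels(frame: List[List[Color]],
                          grid_w: int, grid_h: int) -> List[RobotPixel]:
    """Downsample one video frame into grid_w x grid_h robot pixels.

    Each robot pixel gets a goal position (the center of its grid cell,
    in image coordinates) and the average color of that cell.
    """
    h, w = len(frame), len(frame[0])
    out: List[RobotPixel] = []
    for gy in range(grid_h):
        for gx in range(grid_w):
            # Pixel block covered by this robot pixel.
            x0, x1 = gx * w // grid_w, (gx + 1) * w // grid_w
            y0, y1 = gy * h // grid_h, (gy + 1) * h // grid_h
            block = [frame[y][x] for y in range(y0, y1) for x in range(x0, x1)]
            n = len(block)
            avg = tuple(sum(c[i] for c in block) // n for i in range(3))
            goal = ((x0 + x1) / 2.0, (y0 + y1) / 2.0)
            out.append((goal, avg))
    return out
```

Running this per frame yields a stream of (goal, color) pairs, one per robot pixel, which a downstream controller could track over time.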
20 Claims
1. A method for generating, by mobile entities, a dynamic texture for display, comprising:

receiving an input dynamic texture comprising a plurality of frames of a dynamic visual image;

receiving actual positions of the mobile entities;

computing texture characterization parameters for a time step based on the input dynamic texture, wherein the texture characterization parameters comprise color changes for the mobile entities, the color changes causing the mobile entities to display a visual representation of at least one frame of the dynamic visual image;

computing goal positions for the mobile entities to generate the visual representation, wherein the goal positions are based on the texture characterization parameters and the actual positions of the mobile entities; and

providing each one of the goal positions to a physical controller of the mobile entities.

Dependent claims: 2-11.
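Claim 1's step of computing goal positions "based on the texture characterization parameters and the actual positions of the mobile entities" implies matching robots to targets. A simple greedy nearest-goal assignment is sketched below as one possible stand-in; the patent does not commit to this particular strategy, and the function name is hypothetical:

```python
import math
from typing import List, Tuple

Point = Tuple[float, float]

def assign_goals(actual: List[Point], goals: List[Point]) -> List[int]:
    """Assign each mobile entity (in order) to its nearest unclaimed goal.

    Returns, for each entity, the index of its assigned goal position.
    Greedy and order-dependent: a simplified illustration, not an
    optimal assignment.
    """
    free = set(range(len(goals)))
    assignment: List[int] = []
    for pos in actual:
        best = min(free, key=lambda g: math.dist(pos, goals[g]))
        free.remove(best)
        assignment.append(best)
    return assignment
```

Using actual positions in the assignment keeps travel distances short, so robots reach their cells before the texture advances to the next frame.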
12. A non-transitory computer-readable storage medium including instructions that, when executed by a processor, cause the processor to generate a dynamic texture for display by mobile entities, by performing the steps of:

receiving an input dynamic texture comprising a plurality of frames of a dynamic visual image;

receiving actual positions of the mobile entities;

computing texture characterization parameters for a time step based on the input dynamic texture, wherein the texture characterization parameters comprise color changes for the mobile entities, the color changes causing the mobile entities to display a visual representation of at least one frame of the dynamic visual image;

computing goal positions for the mobile entities to generate the visual representation, wherein the goal positions are based on the texture characterization parameters and the actual positions of the mobile entities; and

providing each one of the goal positions to a physical controller of the mobile entities.
13. A system for generating a dynamic texture for display by mobile entities, comprising:

a memory that is configured to store instructions for a program; and

a processor that is configured to execute the instructions for the program to generate the dynamic texture by performing an operation, the operation comprising:

receiving an input dynamic texture comprising a plurality of frames of a dynamic visual image;

receiving actual positions of the mobile entities;

computing texture characterization parameters for a time step based on the input dynamic texture, wherein the texture characterization parameters comprise color changes for the mobile entities, the color changes causing the mobile entities to display a visual representation of at least one frame of the dynamic visual image;

computing goal positions for the mobile entities to generate the visual representation, wherein the goal positions are based on the texture characterization parameters and the actual positions of the mobile entities; and

providing each one of the goal positions to a physical controller of the mobile entities.

Dependent claims: 14-20.
Specification