Virtual camera control using motion control systems for augmented three-dimensional reality
First Claim
1. A method for integrating a virtual rendering system and a motion control system to output a composite three-dimensional render to a three-dimensional display, the method comprising:
obtaining, from the motion control system, a first three-dimensional camera configuration, wherein the first three-dimensional camera configuration includes camera position data, camera field of view orientation data, camera movement data and camera lens characteristics data;
programming the virtual rendering system using the first three-dimensional camera configuration to correspondingly control a first virtual three-dimensional camera in a virtual environment, wherein the virtual environment includes virtual objects without corresponding real objects in a real environment;
obtaining, from the virtual rendering system, a first virtually rendered three-dimensional feed of the virtual environment and the virtual objects using the first virtual three-dimensional camera;
capturing, from a video capture system, a first video capture three-dimensional feed of the real environment using the first three-dimensional camera configuration;
rendering the composite three-dimensional render by processing the first virtually rendered three-dimensional feed and the first video capture three-dimensional feed; and
outputting the composite three-dimensional render to the three-dimensional display;
wherein outputting the composite three-dimensional render includes simulating a virtual person in one or more scenarios in the real environment having real elements for strategy analysis, and providing a camera fly-by view for each of the one or more scenarios, wherein the virtual person corresponds to a real person in the real environment.
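The steps recited in claim 1 can be illustrated as a single rendering pass. The sketch below is purely illustrative: the patent discloses no source code, and every class, method, and parameter name here (`CameraConfiguration`, `get_camera_configuration`, `program_camera`, `render_feed`, `capture_feed`, `blend`, `show`) is a hypothetical stand-in for the claimed subsystems.

```python
from dataclasses import dataclass

@dataclass
class CameraConfiguration:
    """Hypothetical container for the claimed camera configuration."""
    position: tuple       # camera position data (x, y, z)
    orientation: tuple    # camera field-of-view orientation data
    movement: tuple       # camera movement data
    lens: dict            # camera lens characteristics data

def blend(virtual_frame, real_frame):
    """Hypothetical per-frame compositing step (e.g. keying the
    virtual objects over the captured real scene)."""
    return {"virtual": virtual_frame, "real": real_frame}

def composite_render(motion_control, virtual_renderer, video_capture, display):
    """Run one pass of the claimed method and return the composite render."""
    # 1. Obtain the camera configuration from the motion control system.
    config = motion_control.get_camera_configuration()
    # 2. Program the virtual camera with the same configuration, so the
    #    virtual and real viewpoints stay synchronized.
    virtual_renderer.program_camera(config)
    # 3. Obtain the virtually rendered 3-D feed (virtual objects only).
    virtual_feed = virtual_renderer.render_feed()
    # 4. Capture the video 3-D feed of the real environment under the
    #    same camera configuration.
    real_feed = video_capture.capture_feed(config)
    # 5. Render the composite by processing the two feeds frame by frame.
    composite = [blend(v, r) for v, r in zip(virtual_feed, real_feed)]
    # 6. Output the composite render to the 3-D display.
    display.show(composite)
    return composite
```

The key structural point of the claim is step 2: because the virtual camera is driven by the same configuration the motion control system reports for the real camera, the two feeds align frame by frame and can be composited directly.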
Abstract
There is provided a system and method for integrating a virtual rendering system and a motion control system to provide an augmented three-dimensional reality. There is provided a method for integrating a virtual rendering system and a motion control system for outputting a composite three-dimensional render to a three-dimensional display, the method comprising obtaining, from the motion control system, a robotic three-dimensional camera configuration of a robotic three-dimensional camera in a real environment, programming the virtual rendering system using the robotic three-dimensional camera configuration to correspondingly control a virtual three-dimensional camera in a virtual environment, obtaining a virtually rendered three-dimensional feed using the virtual three-dimensional camera, capturing a video capture three-dimensional feed using the robotic three-dimensional camera, rendering the composite three-dimensional render by processing the feeds, and outputting the composite three-dimensional render to the three-dimensional display.
18 Claims
1. (Independent claim, recited in full above under "First Claim".) Dependent claims 2-9 depend on claim 1.
10. A rendering controller for outputting a composite three-dimensional render to a three-dimensional display, the rendering controller comprising:
a processor configured to:
obtain, from a motion control system, a first three-dimensional camera configuration, wherein the first three-dimensional camera configuration includes camera position data, camera field of view orientation data, camera movement data and camera lens characteristics data;
program a virtual rendering system using the first three-dimensional camera configuration to correspondingly control a first virtual three-dimensional camera in a virtual environment, wherein the virtual environment includes virtual objects without corresponding real objects in a real environment;
obtain, from the virtual rendering system, a first virtually rendered three-dimensional feed of the virtual environment and the virtual objects using the first virtual three-dimensional camera;
capture, from a video capture system, a first video capture three-dimensional feed of the real environment using the first three-dimensional camera configuration;
render the composite three-dimensional render by processing the first virtually rendered three-dimensional feed and the first video capture three-dimensional feed; and
output the composite three-dimensional render to the three-dimensional display;
wherein outputting the composite three-dimensional render includes simulating a virtual person in one or more scenarios in the real environment having real elements for strategy analysis, and providing a camera fly-by view for each of the one or more scenarios, wherein the virtual person corresponds to a real person in the real environment.
Dependent claims 11-18 depend on claim 10.
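Claim 10 recasts the method of claim 1 as an apparatus: a single processor coordinating the four subsystems. A minimal sketch of such a controller follows; as before, every name is hypothetical, since the patent specifies behavior, not an implementation.

```python
class RenderingController:
    """Hypothetical controller corresponding to claim 10: one processor
    object wiring together the motion control system, virtual rendering
    system, video capture system, and three-dimensional display."""

    def __init__(self, motion_control, virtual_renderer, video_capture, display):
        self.motion_control = motion_control
        self.virtual_renderer = virtual_renderer
        self.video_capture = video_capture
        self.display = display

    def run(self):
        # Obtain the camera configuration from the motion control system.
        config = self.motion_control.get_camera_configuration()
        # Program the virtual camera to track the real camera.
        self.virtual_renderer.program_camera(config)
        # Obtain the virtual feed and capture the real feed.
        virtual_feed = self.virtual_renderer.render_feed()
        real_feed = self.video_capture.capture_feed(config)
        # Pair up frames from the two feeds and output the composite
        # render to the 3-D display.
        composite = list(zip(virtual_feed, real_feed))
        self.display.show(composite)
        return composite
```

The apparatus form performs the same steps as the method claim; the difference is purely one of claim category (a configured processor rather than a sequence of acts).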
Specification