Audio/video methods and systems
Abstract
Audio and/or video data is structurally and persistently associated with auxiliary sensor data (e.g., relating to acceleration, orientation or tilt) through use of a unitary data object, such as a modified MPEG file or data stream. In this form, different rendering devices can employ co-conveyed sensor data to alter the audio or video content. Such use of the sensor data may be personalized to different users, e.g., through preference data. For example, accelerometer data can be associated with video data, allowing some users to view a shake-stabilized version of a video, and other users to view the video with such motion artifacts undisturbed. In like fashion, camera parameters, such as focal plane distance, can be co-conveyed with audio/video content—allowing the volume to be diminished (or not, again depending on user preference) when a camera captures audio/video from a distant subject. Some arrangements employ multiple image sensors and/or multiple audio sensors—each also collecting auxiliary data. A great number of other features and arrangements are also detailed.
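The "unitary data object" the abstract describes can be pictured as a single container carrying time-aligned sensor samples alongside the compressed audio and video tracks. The following is a minimal illustrative sketch only; the class and field names are invented here and do not appear in the patent.

```python
from dataclasses import dataclass, field

@dataclass
class SensorSample:
    """One auxiliary reading captured during content production."""
    timestamp_ms: int  # position on the shared media timeline
    kind: str          # e.g. "acceleration", "orientation", "tilt", "focal_distance"
    values: tuple      # the reading(s); units depend on kind

@dataclass
class UnitaryDataObject:
    """Compressed A/V plus co-conveyed sensor data in one structured container."""
    video: bytes                                 # compressed video stream
    audio: bytes                                 # audio stream
    sensors: list = field(default_factory=list)  # SensorSamples, time-aligned with the A/V

    def sensor_track(self, kind: str) -> list:
        """Return the time-ordered samples of one sensor type."""
        return sorted((s for s in self.sensors if s.kind == kind),
                      key=lambda s: s.timestamp_ms)
```

Because the sensor samples travel inside the same object as the media, any downstream rendering unit receives them automatically, which is the "persistent, structured association" the claims rely on. In practice such data might ride in a timed-metadata track of an MPEG container rather than a Python object.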
19 Claims
1. A method comprising the acts:
in an originating hardware unit, receiving video information captured during entertainment content production, and transforming the video information for representation in a unitary data object, said transforming including compressing the video information;

in the originating unit, receiving audio information captured during entertainment content production, and transforming the audio information for representation in said unitary data object;

in the originating unit, receiving sensor information captured during entertainment content production, said sensor information comprising at least one of acceleration, orientation or tilt data, said sensor information being different than said video information and being different than said audio information, and transforming the sensor information for representation in said unitary data object; and

providing the unitary data object, comprising said compressed video, audio and sensor information, for rendering by first and second remote end user rendering units;

wherein the sensor information is persistently associated with the audio and compressed video information in a structured fashion by said unitary data object, and is thereby adapted for use by a processor in each of said first and second remote rendering units to selectively alter the audio and/or video information based on the sensor information—or not, in accordance with end user or device preferences, so that different end users may view the video content with, or without, motion stabilization or motion-related sound effects.

View Dependent Claims (2, 3, 15, 16, 17, 18, 19)
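Claim 1's "selectively alter ... or not, in accordance with end user or device preferences" amounts to a per-user branch at render time. A hedged sketch of that branch follows; the function names are invented for illustration, and the stabilization step is stubbed out rather than implementing real frame warping.

```python
def render_video(frames, accel_samples, prefs):
    """Return frames either shake-stabilized or untouched, per user preference.

    frames        -- decoded video frames
    accel_samples -- co-conveyed accelerometer readings, time-aligned with frames
    prefs         -- dict of end-user/device preferences
    """
    if not prefs.get("stabilize", False):
        # Render with motion artifacts undisturbed, as some users prefer.
        return frames
    # Illustrative placeholder: a real renderer would counter-shift each
    # frame by the camera motion estimated from the accelerometer track.
    return [compensate(frame, sample)
            for frame, sample in zip(frames, accel_samples)]

def compensate(frame, sample):
    # Stub: tag the frame as stabilized instead of actually warping pixels.
    return ("stabilized", frame, sample)
```

The key point the claim turns on is that both rendering units receive identical data; only the preference check decides whether the sensor track is consulted.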
4. A method comprising the acts:
in an originating hardware unit, receiving video information captured during entertainment content production, and transforming the video information for representation in a unitary data object, said transforming including compressing the video information;

in the originating unit, receiving audio information captured during entertainment content production, and transforming the audio information for representation in said unitary data object;

in the originating unit, receiving sensor information captured during entertainment content production, said sensor information comprising data indicating focal length of a camera that captured the video information, said sensor information being different than said video information and being different than said audio information; and

providing the unitary data object, comprising said compressed video, audio and sensor information, for rendering by first and second remote end user rendering units;

wherein the sensor information is persistently associated with the audio and video information in a structured fashion by said unitary data object, and is thereby adapted for use by a processor in each of said first and second remote rendering units to selectively alter the audio information based on the sensor information—or not, in accordance with end user or device preferences, so that different end users may view the video content with, or without, dimensional audio.

View Dependent Claims (5, 6)
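Claim 4's co-conveyed focal-length data lets a renderer scale playback volume with the camera-to-subject distance, again gated on preference. The sketch below assumes a simple inverse-distance attenuation law; that model, and all names in it, are assumptions for illustration and are not specified by the claim.

```python
def scaled_gain(subject_distance_m, prefs, reference_m=1.0):
    """Gain factor for playback volume based on camera-to-subject distance.

    Returns 1.0 (no change) when the user has not opted into dimensional
    audio; otherwise attenuates inversely with distance beyond reference_m.
    """
    if not prefs.get("dimensional_audio", False):
        return 1.0
    return min(1.0, reference_m / max(subject_distance_m, reference_m))
```

So a subject ten meters away would play at one-tenth volume for a user who enabled the effect, and at full volume for everyone else.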
7. A mobile phone device including a processor, a memory, a screen, an audio output, and a wireless interface, the memory containing software instructions that configure the device to receive a unitary data object via said wireless interface, the unitary data object representing entertainment content comprising compressed video information, audio information, and sensor information, said sensor information comprising data indicating motion of a camera used to capture the video information and/or indicating distance of said camera from a camera subject, said sensor information being different than said video information and being different than said audio information, said software instructions enabling a user of the device to specify whether the entertainment content should be altered in accordance with said sensor information, when the entertainment content is rendered to the user.
Specification