AUDIO/VIDEO METHODS AND SYSTEMS
Abstract
Audio and/or video data is structurally and persistently associated with auxiliary sensor data (e.g., relating to acceleration, orientation or tilt) through use of a unitary data object, such as a modified MPEG file or data stream. In this form, different rendering devices can employ co-conveyed sensor data to alter the audio or video content. Such use of the sensor data may be personalized to different users, e.g., through preference data. For example, accelerometer data can be associated with video data, allowing some users to view a shake-stabilized version of a video, and other users to view the video with such motion artifacts undisturbed. In like fashion, camera parameters, such as focal plane distance, can be co-conveyed with audio/video content—allowing the volume to be diminished (or not, again depending on user preference) when a camera captures audio/video from a distant subject. Some arrangements employ multiple image sensors and/or multiple audio sensors—each also collecting auxiliary data. A great number of other features and arrangements are also detailed.
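The abstract's "unitary data object" can be illustrated with a minimal sketch. The container layout below—a length-prefixed header plus tagged `VID0`/`AUD0`/`SEN0` payloads—is hypothetical and is not the patent's actual file format (the patent contemplates, e.g., a modified MPEG file); it merely shows sensor data being structurally and persistently bound to the audio/video it describes.

```python
import json
import struct

# Hypothetical "unitary data object": one container interleaving video,
# audio, and auxiliary sensor tracks, so the sensor data travels with
# the A/V content rather than as a separate file.
def pack_unitary_object(video_bytes, audio_bytes, sensor_samples):
    header = {
        "tracks": ["video", "audio", "sensor"],
        "sensor_fields": ["t", "accel_x", "accel_y", "accel_z"],
    }
    sensor_bytes = json.dumps(sensor_samples).encode("utf-8")
    blob = b""
    for tag, payload in ((b"VID0", video_bytes),
                         (b"AUD0", audio_bytes),
                         (b"SEN0", sensor_bytes)):
        # Each track: 4-byte tag, 4-byte big-endian length, payload.
        blob += tag + struct.pack(">I", len(payload)) + payload
    header_bytes = json.dumps(header).encode("utf-8")
    return struct.pack(">I", len(header_bytes)) + header_bytes + blob

def unpack_unitary_object(data):
    hlen = struct.unpack(">I", data[:4])[0]
    header = json.loads(data[4:4 + hlen])
    tracks, pos = {}, 4 + hlen
    while pos < len(data):
        tag = data[pos:pos + 4].decode("ascii")
        plen = struct.unpack(">I", data[pos + 4:pos + 8])[0]
        tracks[tag] = data[pos + 8:pos + 8 + plen]
        pos += 8 + plen
    return header, tracks
```

A rendering unit that parses such an object recovers the sensor track alongside the A/V tracks, and can then apply (or skip) sensor-driven alterations per user preference.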
42 Claims
1-33. (canceled)
34. A method comprising the acts:
in an originating hardware unit, receiving video information captured during entertainment content production, and transforming the video information for representation in a unitary data object;

in the originating unit, receiving audio information captured during entertainment content production, and transforming the audio information for representation in said unitary data object;

in the originating unit, receiving sensor information captured during entertainment content production, said sensor information comprising at least one parameter relating to acceleration, orientation or tilt, and transforming the sensor information for representation in said unitary data object; and

providing the unitary data object, comprising said video, audio and sensor information, for rendering by plural remote end user rendering units;

wherein the sensor information is persistently associated with the audio and video information in a structured fashion by said unitary data object, and is thereby adapted for use by a processor in each of said plural remote rendering units to selectively alter the audio and/or video information based on the sensor information, or not, in accordance with end user or device preferences, so that different end users may view the video content with, or without, motion stabilization or motion-related sound effects.

(Dependent claims: 35, 36)
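The selective alteration recited in claim 34 can be sketched as a render-side toggle: each rendering unit decides, per user or device preference, whether to apply shake compensation derived from the co-conveyed accelerometer track. The frame and sensor representations below are simplified stand-ins, not the patent's implementation.

```python
# Hypothetical render-side stabilization: frames carry nominal (x, y)
# positions; accel_offsets carry per-frame (dx, dy) shake estimated
# from the accelerometer track in the unitary data object.
def render_frames(frames, accel_offsets, stabilize):
    out = []
    for (x, y), (dx, dy) in zip(frames, accel_offsets):
        if stabilize:
            out.append((x - dx, y - dy))  # counteract measured shake
        else:
            out.append((x, y))            # leave motion artifacts intact
    return out

frames = [(0, 0), (1, 0), (2, 0)]
shake = [(0, 0), (0, 1), (0, -1)]
stabilized = render_frames(frames, shake, stabilize=True)    # → [(0, 0), (1, -1), (2, 1)]
as_captured = render_frames(frames, shake, stabilize=False)  # → [(0, 0), (1, 0), (2, 0)]
```

One user's device can pass `stabilize=True` while another's passes `stabilize=False`, both consuming the same unitary data object.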
37. A method comprising the acts:
in an originating hardware unit, receiving video information captured during entertainment content production, and transforming the video information for representation in a unitary data object;

in the originating unit, receiving audio information captured during entertainment content production, and transforming the audio information for representation in said unitary data object;

in the originating unit, receiving sensor information captured during entertainment content production, said sensor information comprising at least one parameter relating to focal length of a camera that captured the video information; and

providing the unitary data object, comprising said video, audio and sensor information, for rendering by plural remote end user rendering units;

wherein the sensor information is persistently associated with the audio and video information in a structured fashion by said unitary data object, and is thereby adapted for use by a processor in each of said plural remote rendering units to selectively alter the audio information based on the sensor information, or not, in accordance with end user or device preferences, so that different end users may view the video content with, or without, dimensional audio.

(Dependent claims: 38, 39)
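Claim 37's audio alteration matches the abstract's example of diminishing volume when the camera captures a distant subject. A minimal sketch, assuming a simple inverse-with-distance gain beyond a hypothetical reference distance (the attenuation law and `ref_distance_m` parameter are illustrative choices, not from the patent):

```python
# Hypothetical preference-controlled volume scaling driven by the
# co-conveyed focal-plane distance: far subjects sound quieter,
# unless the user opts out of the effect.
def scale_volume(samples, focal_distance_m, attenuate, ref_distance_m=2.0):
    if not attenuate or focal_distance_m <= ref_distance_m:
        return list(samples)          # user opted out, or subject is near
    gain = ref_distance_m / focal_distance_m  # rolls off with distance
    return [s * gain for s in samples]

quiet = scale_volume([1.0, 0.5], focal_distance_m=8.0, attenuate=True)   # → [0.25, 0.125]
loud = scale_volume([1.0, 0.5], focal_distance_m=8.0, attenuate=False)   # → [1.0, 0.5]
```

As with stabilization, the same unitary data object serves both users; only the render-time preference differs.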
40. A mobile phone device including a processor, a memory, a screen, an audio output, and a wireless interface, the memory containing software instructions that configure the device to receive a unitary data object via said wireless interface, the unitary data object representing entertainment content comprising video information, audio information, and sensor information, said sensor information indicating motion of a camera used to capture the video information and/or indicating distance of said camera from a camera subject, said software instructions enabling a user of the device to specify whether the entertainment content should be altered in accordance with said sensor information, when the entertainment content is rendered to the user.
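The handset-side behavior of claim 40—persisting a user's choice and honoring it at render time—can be sketched as follows; the class and method names are hypothetical, not from the patent:

```python
# Hypothetical claim-40 device logic: a stored user preference gates
# whether sensor-driven alterations are applied during rendering.
class HandsetRenderer:
    def __init__(self):
        self.use_sensor_data = True  # user-settable toggle

    def set_preference(self, use_sensor_data):
        self.use_sensor_data = bool(use_sensor_data)

    def render(self, video, audio, sensor):
        if self.use_sensor_data and sensor is not None:
            # Stabilization / volume scaling would be applied here.
            return ("altered", video, audio)
        return ("as-captured", video, audio)
```

Two users with identical content but opposite preferences thus see different renderings from the same unitary data object.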
Specification