System and method for processing an audio and video input in a point of view program for haptic delivery
First Claim
1. A system for processing an audio and video input in a point of view program for haptic delivery, said system comprising:
at least one modular haptic tower capable of air displacement with at least one of variable air flow and latency control;
a processor;
a memory element coupled to the processor;
a haptic engine comprising:
an audio and video (a/v) buffer recognition block;
a program executable by the haptic engine and configured to:
recognize, by the a/v buffer recognition block, at least one of the audio and video input from a virtual environment comprising a user, and determine, for at least one tagged event:
a pixel color score of the tagged event by capturing a screen buffer, wherein the pixel color score is determined based on at least a calculated average hue score of the tagged event using pixel data in the screen buffer, a calculated average of red and blue channels in the screen buffer, and a calculated average luminescence in the screen buffer; and
convert the at least one pixel-color-scored tagged event into a haptic output command in response to the pixel color score being above a threshold, and, based on the haptic output command, control at least one of an intensity of a motor coupled to a fan assembly and a brake coupled to the motor enabling latency control, resulting in the air displacement corresponding to the virtual environment comprising the user.
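The scoring and conversion steps recited in claim 1 can be sketched as follows. This is a minimal illustration, not the patented implementation: the claim names the three averaged quantities (hue, red/blue channels, luminescence) but does not disclose how they are weighted, so the equal-weight combination, the Rec. 601 luma coefficients standing in for "luminescence," and the linear intensity mapping are all assumptions.

```python
import colorsys

def pixel_color_score(pixels):
    """Score a tagged event from screen-buffer pixel data.

    `pixels` is a list of (r, g, b) tuples in 0..255. The three means
    named in the claim are computed and then combined with equal
    weights -- the weighting is an assumption, not from the patent.
    """
    n = len(pixels)
    # Average hue (colorsys returns hue in 0..1).
    avg_hue = sum(colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)[0]
                  for r, g, b in pixels) / n
    # Average of the red and blue channels, normalized to 0..1.
    avg_red_blue = sum((r + b) / 2 for r, g, b in pixels) / (n * 255)
    # Rec. 601 luma weights stand in for the claim's "luminescence".
    avg_luma = sum(0.299 * r + 0.587 * g + 0.114 * b
                   for r, g, b in pixels) / (n * 255)
    return (avg_hue + avg_red_blue + avg_luma) / 3

def haptic_command(score, threshold=0.5):
    """Convert a score into a haptic output command: above the
    threshold, drive the fan motor proportionally; at or below it,
    engage the brake (latency control) with the motor off."""
    if score <= threshold:
        return {"motor_intensity": 0.0, "brake": True}
    intensity = min(1.0, (score - threshold) / (1.0 - threshold))
    return {"motor_intensity": intensity, "brake": False}
```

For example, a buffer of pure white pixels yields a score of 2/3, which exceeds the default threshold and drives the motor at one-third intensity, while an all-black buffer scores 0 and engages the brake.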
Abstract
The present embodiments disclose apparatus, systems and methods for allowing users to receive targeted delivery of haptic effects—air flow of variable intensity and temperature—from a single tower or surround tower configuration. The haptic tower may have an enclosed, modular assembly that manipulates air flow, fluid flow, scent, or any other haptic or sensation, for an immersed user. Moreover, the system applies sensor technology to capture data regarding a user's body positioning and orientation in the real environment. This data, and/or data from a program coupled to the system, and/or audio-video data corresponding to a user in a virtual environment, is relayed to a haptic engine; recognized; scored along a plurality of parameters; and converted into a haptic output command for haptic output expression corresponding to the user in the virtual environment.
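The abstract's "targeted delivery" from a surround tower configuration, driven by sensor data on the user's orientation, can be sketched as a tower-selection step. The four-tower layout, the angle convention, and the nearest-angle selection rule below are all hypothetical assumptions for illustration; the patent does not specify them.

```python
import math

# Hypothetical four-tower surround layout: bearing of each tower
# relative to the user's forward direction, in radians.
TOWER_ANGLES = {"front": 0.0, "right": math.pi / 2,
                "back": math.pi, "left": -math.pi / 2}

def select_tower(event_bearing, user_yaw):
    """Pick the tower best aligned with an event.

    `event_bearing` is the event's direction in the virtual world and
    `user_yaw` is the user's heading from head-tracking sensors, both
    in radians. The event is rotated into the user's frame, then the
    tower with the smallest angular gap is chosen.
    """
    # Wrap the relative bearing into (-pi, pi].
    relative = (event_bearing - user_yaw + math.pi) % (2 * math.pi) - math.pi

    def angular_gap(angle):
        return abs((angle - relative + math.pi) % (2 * math.pi) - math.pi)

    return min(TOWER_ANGLES, key=lambda name: angular_gap(TOWER_ANGLES[name]))
```

For example, an event straight ahead in the world while the user has turned to face backward should be rendered by the tower behind the user.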
27 Citations
13 Claims
Specification