System and method for selection of data according to measurement of physiological parameters
First Claim
1. A method for filtering targeted data comprising steps of:
- a. providing a plurality of M devices Di;
each of said Di is adapted to measure a physiological parameter;
at least one of said physiological parameters being a voice-based physiological parameter selected from voice intonation and voice tone and at least one of said physiological parameters being a non-voice based physiological parameter selected from a group consisting of skin conductivity, rate of heart beat, blood pressure, brain activity, smell, facial expression, eye movement, and body language;
b. providing a database of a plurality of classified data;
said classification being according to said physiological parameters;
wherein said classified data is selected from a group consisting of:
coupons, marketing data, informational data, social data, matching data between individuals in a social network, and any combination thereof;
c. measuring said at least one non-voice based physiological parameter of a mammalian subject using said devices;
d. measuring said voice intonation or voice tone and determining at least one voice-based emotional attitude, as follows:
i. obtaining a database comprising reference tones and voice-based reference emotional attitudes corresponding to each of said reference tones;
ii. pronouncing at least one word by a speaker for the duration of a sample period;
iii. recording said at least one word so as to obtain a signal representing sound volume as a function of frequency for said sample period;
iv. processing said signal so as to obtain voice characteristics of said speaker, wherein said processing includes determining a Function A, said Function A being defined as the average or maximum sound volume as a function of sound frequency, from within a range of frequencies measured in said sample period, and wherein said processing further includes determining a Function B, said Function B being defined as the average, or maximum, of said Function A over said range of frequencies and dyadic multiples thereof;
v. comparing said voice characteristics to said reference tones so as to indicate at least one of said voice-based reference emotional attitudes;
e. storing results of said measurements and said voice-based emotional attitude in a computer readable medium having instructions thereon;
wherein said method additionally comprises steps of:
f. deriving, from said at least one voice-based emotional attitude and said at least one non-voice based physiological parameter, an emotional state of said mammalian subject via said instructions; and
g. selecting via said instructions, at least some of said classified data according to said emotional state.
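The voice-processing steps d(iii)-d(iv) of claim 1 can be illustrated in code. The sketch below is a minimal interpretation, assuming a mono PCM signal and a short-time FFT as the means of obtaining "sound volume as a function of frequency"; the function names, frame sizes, and the choice of FFT are assumptions of this illustration, not details specified by the patent.

```python
import numpy as np

def function_a(signal, sample_rate, use_max=False):
    """Function A: average (or maximum) sound volume as a function of
    sound frequency over the sample period, here obtained from the
    magnitude spectra of short overlapping windowed frames."""
    frame, hop = 1024, 512
    frames = [signal[i:i + frame] * np.hanning(frame)
              for i in range(0, len(signal) - frame, hop)]
    spectra = np.abs(np.fft.rfft(frames, axis=1))  # volume per frequency bin, per frame
    freqs = np.fft.rfftfreq(frame, d=1.0 / sample_rate)
    volume = spectra.max(axis=0) if use_max else spectra.mean(axis=0)
    return freqs, volume

def function_b(freqs, volume, f_lo, f_hi, octaves=3, use_max=False):
    """Function B: average (or maximum) of Function A over a base range
    of frequencies [f_lo, f_hi) and its dyadic multiples (2x, 4x, ...)."""
    per_band = []
    for k in range(octaves + 1):
        band = (freqs >= f_lo * 2**k) & (freqs < f_hi * 2**k)
        if band.any():
            per_band.append(volume[band].max() if use_max else volume[band].mean())
    return float(max(per_band)) if use_max else float(np.mean(per_band))
```

Under this reading, Function A collapses the recording into a volume-versus-frequency curve, and Function B folds that curve across octaves, so that a tone and its dyadic harmonics contribute to a single characteristic value.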
Abstract
It is one object of the present invention to disclose a method for filtering targeted data comprising steps of: a. providing a plurality of M devices Di, each of the Di being adapted to measure a physiological parameter; b. providing a database of a plurality of classified data, the classification being according to the physiological parameters; c. measuring a plurality of N physiological parameters of a mammalian subject using the devices; d. storing results of the measurement in a computer readable medium having instructions thereon; wherein the method additionally comprises a step of: e. selecting, via the instructions, at least some of the classified data according to the results of measurement of the physiological parameters.
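The abstract's overall flow (measure parameters, then select classified data matching the measurements) can be sketched as follows. The data layout is an assumption for illustration only: each device is modeled as a named read function, and each classified item is assumed to carry acceptable parameter ranges; none of these structures are defined by the patent.

```python
# Hypothetical sketch of steps a-e: measure, then filter classified data.
def measure_all(devices):
    """Step c: collect one reading from each of the M devices Di."""
    return {name: read() for name, read in devices.items()}

def select_classified(database, readings):
    """Step e: keep items whose stored parameter ranges all match the readings."""
    selected = []
    for item in database:
        ranges = item["ranges"]  # e.g. {"heart_rate": (100, 200)}
        if all(p in readings and lo <= readings[p] <= hi
               for p, (lo, hi) in ranges.items()):
            selected.append(item)
    return selected
```

For example, a database entry tagged with an elevated heart-rate range would be selected only when the subject's measured heart rate falls inside that range.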
16 Claims
1. (Recited above as the First Claim.) - View Dependent Claims (2, 3, 4, 5, 6, 7, 8)
9. A system for filtration of targeted data comprising:
a. a plurality of M devices Di;
each of said Di is adapted to measure a physiological parameter of a mammalian subject;
at least one of said physiological parameters being a voice-based physiological parameter selected from voice intonation and voice tone and at least one of said physiological parameters being a non-voice based physiological parameter selected from a group consisting of skin conductivity, rate of heart beat, blood pressure, brain activity, smell, facial expression, eye movement, and body language;
b. a database of a plurality of classified data;
said classification being according to said physiological parameters;
wherein said classified data is selected from a group consisting of:
coupons, marketing data, informational data, social data, matching data between individuals in a social network, and any combination thereof;
c. a computer readable medium (CRM) having instructions thereon for storing results of said measurements;
said CRM being in communication with said M devices and said database;
d. a sub-system for indicating at least one voice-based emotional attitude of a speaker using voice tone analysis, said sub-system comprising:
i. a sound recorder adapted to record a word or set of words that is repeatedly pronounced by a speaker for the duration of a sample period, and to produce a signal representing sound volume as a function of frequency for said sample period;
ii. processing means coupled to said recorder, for processing said signal so as to obtain voice characteristics relating to the tone of said speaker, wherein said voice characteristics include a Function A defined as the average or maximum sound volume as a function of sound frequency from within a range of frequencies measured in said sample period, and a Function B defined as the average, or maximum, of said Function A over said range of frequencies and dyadic multiples thereof; and
iii. a database comprising a plurality of reference tones and emotional attitudes corresponding to each of said reference tones, for allowing indication of at least one voice-based emotional attitude of said speaker through comparison of said voice characteristics to said reference tones;
wherein said instructions are additionally for:
e. deriving, from said at least one voice-based emotional attitude and said at least one non-voice based physiological parameter, an emotional state of said mammalian subject; and
f. selecting at least some of said classified data according to said emotional state and according to said results of measurement of said physiological parameters. - View Dependent Claims (10, 11, 12, 13, 14, 15, 16)
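The comparison recited in element d(iii) of claim 9, matching derived voice characteristics against stored reference tones, can be sketched as a nearest-neighbor lookup. The Euclidean distance metric and the feature layout (e.g. a vector of a Function A peak frequency and a Function B value) are assumptions of this illustration; the patent does not specify the comparison method.

```python
# Hypothetical sketch: indicate an emotional attitude via the nearest reference tone.
def indicate_attitude(characteristics, reference_db):
    """Compare a voice-characteristics feature vector against reference tones
    and return the emotional attitude of the closest reference."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    best = min(reference_db, key=lambda ref: dist(characteristics, ref["tone"]))
    return best["attitude"]
```

In the full system, this indicated attitude would then be combined with the non-voice physiological parameter (elements e and f) to derive the emotional state driving the data selection.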
Specification