System for configuring collective emotional architecture of individual and methods thereof
Abstract
The present invention provides a system and method for configuring the collective emotional architecture of an individual. The system comprises: an input module adapted to receive a voice input and an orientation reference selected from a group consisting of date, time, location, and any combination thereof; a personal collective emotionbase comprising benchmark tones and benchmark emotional attitudes (BEA), each benchmark tone corresponding to a specific BEA; and at least one processor in communication with a computer readable medium (CRM). The processor executes a set of operations received from the CRM.
38 Claims
1. A system for configuring collective emotional architecture of an individual, said system comprising:
a. an input module, said input module is adapted to receive voice input and an orientation reference selected from a group consisting of: date, time, and location corresponding to said voice input, and any combination thereof;
b. a personal collective emotionbase, said emotionbase comprising benchmark tones and benchmark emotional attitudes (BEA), each of said benchmark tones corresponding to a specific BEA;
c. at least one processor in communication with a computer readable medium (CRM), said processor executing a set of operations received from said CRM, said set of operations comprising steps of:
i. obtaining a signal representing sound volume as a function of frequency from said voice input;
ii. processing said signal so as to obtain voice characteristics of said individual, said processing including determining a Function A, said Function A being defined as the average or maximum sound volume as a function of sound frequency, from within a range of frequencies measured in said voice input; said processing further including determining a Function B, said Function B defined as the averaging or maximizing of said Function A over said range of frequencies and dyadic multiples thereof; and
iii. comparing said voice characteristics to said benchmark tones;
iv. assigning to said voice characteristics at least one of said BEAs corresponding to said benchmark tones;
wherein said set of operations additionally comprises steps of:
v. assigning said orientation reference to said assigned at least one of said BEAs; and
vi. archiving said assigned at least one orientation reference and said assigned at least one BEA to said emotionbase.
Dependent claims: 2-18.
19. A method for configuring a collective emotional architecture of an individual, said method comprising steps of:
a. receiving voice input and an orientation reference selected from a group consisting of date, time, and location corresponding to said voice input, and any combination thereof;
b. obtaining an emotionbase, said emotionbase comprising benchmark tones and benchmark emotional attitudes (BEA), each of said benchmark tones corresponding to a specific BEA;
c. executing a set of operations received from a computer readable medium (CRM), by at least one processor in communication with said CRM, said set of operations comprising:
i. obtaining a signal representing sound volume as a function of frequency from said voice input;
ii. processing said signal so as to obtain voice characteristics of said individual, said processing including determining a Function A, said Function A being defined as the average or maximum sound volume as a function of sound frequency, from within a range of frequencies measured in said voice input; said processing further including determining a Function B, said Function B defined as the averaging or maximizing of said Function A over said range of frequencies and dyadic multiples thereof;
iii. comparing said voice characteristics to said benchmark tones; and
iv. assigning to said voice characteristics at least one of said BEAs corresponding to said benchmark tones;
wherein said method additionally comprises steps of:
v. assigning said orientation reference to said assigned at least one of said BEAs; and
vi. archiving said assigned at least one orientation reference and said assigned at least one BEA to said emotionbase.
Dependent claims: 20-36.
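Steps iii through vi (matching the voice characteristics against benchmark tones, attaching the orientation reference, and archiving) can be sketched as below. The benchmark vectors, the BEA labels, and the Euclidean-distance comparison are all illustrative assumptions; the claims specify neither a comparison metric nor a storage format for the emotionbase.

```python
import numpy as np
from datetime import datetime

# Hypothetical benchmark tones: each BEA label maps to a benchmark
# feature vector. Names and values are illustrative placeholders.
BENCHMARKS = {
    "calm":   np.array([0.9, 0.4, 0.2]),
    "stress": np.array([0.3, 0.8, 0.7]),
    "joy":    np.array([0.6, 0.9, 0.3]),
}

def assign_bea(voice_vec, benchmarks=BENCHMARKS):
    """Steps iii-iv: compare voice characteristics to the benchmark
    tones and return the BEA whose benchmark is closest (Euclidean
    distance; the claim leaves the comparison open)."""
    return min(benchmarks,
               key=lambda bea: np.linalg.norm(voice_vec - benchmarks[bea]))

def archive(emotionbase, voice_vec, when=None, where=None):
    """Steps v-vi: attach the orientation reference (date/time and
    location) to the assigned BEA and archive the record."""
    record = {
        "bea": assign_bea(voice_vec),
        "when": when or datetime.now().isoformat(),
        "where": where,
    }
    emotionbase.append(record)  # the "emotionbase" here is a plain list
    return record
```

Archiving a voice vector close to the "calm" benchmark would, under these assumptions, store a record such as `{"bea": "calm", "when": ..., "where": ...}` in the emotionbase.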
37. A system for configuring collective emotional architecture of an individual, said system comprising:
a. an input module, said input module is adapted to receive voice input and an orientation reference selected from a group consisting of: date, time, and location corresponding to said voice input, and any combination thereof;
b. a personal collective emotionbase, said emotionbase comprising benchmark tones and benchmark emotional attitudes (BEA), each of said benchmark tones corresponding to a specific BEA;
c. at least one processor in communication with a computer readable medium (CRM), said processor executing a set of operations received from said CRM, said set of operations comprising steps of:
i. obtaining a signal representing sound volume as a function of frequency from said voice input;
ii. processing said signal so as to obtain voice characteristics of said individual, said processing including determining a Function A, said Function A being defined as the average or maximum sound volume as a function of sound frequency, from within a range of frequencies measured in said voice input; said processing further including determining a Function B, said Function B defined as the averaging or maximizing of said Function A over said range of frequencies and dyadic multiples thereof; and
iii. comparing said voice characteristics to said benchmark tones;
iv. assigning to said voice characteristics at least one of said BEAs corresponding to said benchmark tones;
wherein said set of operations additionally comprises a step of assigning said orientation reference to said assigned at least one of said BEAs; and
wherein said system additionally comprises an output module, said output module adapted to provide said individual feedback regarding at least one selected from a group consisting of: his emotional attitude, suggestions how to change his emotional attitude, and suggestions how to avoid a specific emotional attitude.
38. A method for configuring a collective emotional architecture of an individual, said method comprising steps of:
a. receiving voice input and an orientation reference selected from a group consisting of date, time, and location corresponding to said voice input, and any combination thereof;
b. obtaining an emotionbase, said emotionbase comprising benchmark tones and benchmark emotional attitudes (BEA), each of said benchmark tones corresponding to a specific BEA;
c. executing a set of operations received from a computer readable medium (CRM), by at least one processor in communication with said CRM, said set of operations comprising:
i. obtaining a signal representing sound volume as a function of frequency from said voice input;
ii. processing said signal so as to obtain voice characteristics of said individual, said processing including determining a Function A, said Function A being defined as the average or maximum sound volume as a function of sound frequency, from within a range of frequencies measured in said voice input; said processing further including determining a Function B, said Function B defined as the averaging or maximizing of said Function A over said range of frequencies and dyadic multiples thereof;
iii. comparing said voice characteristics to said benchmark tones; and
iv. assigning to said voice characteristics at least one of said BEAs corresponding to said benchmark tones;
wherein said method additionally comprises steps of:
v. assigning said orientation reference to said assigned at least one of said BEAs; and
vi. providing said individual feedback regarding at least one selected from a group consisting of: one's emotional attitude, suggestions how to change one's emotional attitude, and suggestions how to avoid a specific emotional attitude.
Specification