Health monitoring system and appliance
First Claim
1. An electronic device configured to process audible expressions from users, comprising:
a network interface;
a haptic engine configured to provide kinesthetic communication;
at least one computing device; and
computer readable memory including instructions operable to be executed by the at least one computing device to perform a set of actions, configuring the at least one computing device to:
receive in real time, over a network via the network interface, a digitized human vocal expression of a first user and one or more digital images from a remote device;
process the received digitized human vocal expression using digital signal processing to convert the digitized audible expression from a time domain to a frequency domain, and to perform at least one of dimensionality reduction or warping of two or more frequencies to a first scale thereby reducing an amount of vocal expression data that needs to be processed;
use the processed digitized human vocal expression to determine characteristics of the human vocal expression, including:
determine, using a pitch analysis module, a pitch of the human vocal expression,
determine, using a volume analysis module, a volume of the human vocal expression,
determine, using a rapidity analysis module, how rapidly the first user is speaking in the human vocal expression,
determine, using a vocal tract analysis module, a magnitude spectrum of the human vocal expression, and
identify, using a non-speech analysis module, pauses and the length of pauses in speech in the human vocal expression;
use a natural language module to convert audible speech in the human vocal expression to text and to understand audible speech in the human vocal expression;
compare the determined characteristics of the human vocal expression with baseline, historical characteristics of human vocal expressions associated with the first user to identify changes in human vocal expression characteristics of the first user;
process the received one or more images to detect characteristics of the first user face, including detecting if one or more of the following are present:
a sagging lip, a crooked smile, uneven eyebrows, facial droop;
compare the detected characteristics of the first user face with baseline, historical characteristics of the first user face accessed from a data store, and identify changes in characteristics of the first user face;
weight, using a first weight, a first identified change with respect to a first vocal expression characteristic of the first user;
weight, using a second weight, a second identified change with respect to a second vocal expression characteristic of the first user;
weight, using a third weight, a third identified change with respect to a first characteristic of the first user face;
weight, using a fourth weight, a fourth identified change with respect to a second characteristic of the first user face;
infer a change in health status of the first user based at least in part on the weighted first identified change with respect to the first vocal expression characteristic of the first user, the weighted second identified change with respect to the second vocal expression characteristic of the first user, the weighted third identified change with respect to the first characteristic of the first user face, and the weighted fourth identified change with respect to the second characteristic of the first user face;
based at least in part on the inferred change in health status of the first user, determine if a vehicle is to be deployed to the first user; and
at least partly in response to a determination that a vehicle is to be deployed to the first user, enable a vehicle to be deployed to a location of the first user.
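The time-to-frequency conversion with "warping of two or more frequencies to a first scale" recited above is commonly implemented as a short-time Fourier transform followed by a mel filterbank, which also performs the claimed dimensionality reduction. The claim names no particular scale or parameters, so the mel scale, frame sizes, and filter count below are illustrative assumptions, not values from the patent:

```python
import numpy as np

def hz_to_mel(f):
    """Warp a frequency in Hz onto the (assumed) mel scale."""
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    """Triangular filters that warp n_fft//2+1 FFT bins onto n_filters
    mel bands, reducing the data that must be processed per frame."""
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):
            fb[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fb[i - 1, k] = (right - k) / max(right - center, 1)
    return fb

def warp_to_mel(signal, sr=16000, n_fft=512, hop=256, n_filters=26):
    """Time domain -> frequency domain -> mel-warped energies per frame."""
    window = np.hanning(n_fft)
    frames = [signal[s:s + n_fft] * window
              for s in range(0, len(signal) - n_fft + 1, hop)]
    power = np.abs(np.fft.rfft(frames, n=n_fft)) ** 2  # power spectrum
    return power @ mel_filterbank(n_filters, n_fft, sr).T

# Example: one second of a 440 Hz tone is reduced from 257 FFT bins
# per frame to 26 mel-band energies per frame.
sr = 16000
t = np.arange(sr) / sr
mel_frames = warp_to_mel(np.sin(2 * np.pi * 440.0 * t), sr=sr)
```

Each output row is one analysis frame; downstream modules (pitch, volume, rapidity) would consume these reduced frames rather than the raw spectrum.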
Abstract
Systems and methods are disclosed. A digitized human vocal expression of a user and digital images are received over a network from a remote device. The digitized human vocal expression is processed to determine characteristics of the human vocal expression, including pitch, volume, rapidity, a magnitude spectrum, and/or pauses in speech. Digital images are received and processed to detect characteristics of the user's face, including detecting if one or more of the following is present: a sagging lip, a crooked smile, uneven eyebrows, and/or facial droop. Based at least in part on the human vocal expression characteristics and face characteristics, a determination is made as to what action is to be taken. A cepstrum pitch may be determined using an inverse Fourier transform of a logarithm of a spectrum of a human vocal expression signal. The volume may be determined using peak heights in a power spectrum of the human vocal expression.
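The abstract's cepstrum pitch (the inverse Fourier transform of the logarithm of the spectrum) can be sketched in a few lines of NumPy. The search range, epsilon, and the synthetic test signal are illustrative assumptions:

```python
import numpy as np

def cepstrum_pitch(signal, sr, fmin=60.0, fmax=400.0):
    """Estimate pitch as the peak of the real cepstrum: the inverse
    Fourier transform of the log magnitude spectrum, searched over
    quefrencies corresponding to plausible speaking pitches."""
    spectrum = np.abs(np.fft.rfft(signal))
    cepstrum = np.fft.irfft(np.log(spectrum + 1e-12))
    q_lo, q_hi = int(sr / fmax), int(sr / fmin)
    peak_q = q_lo + np.argmax(cepstrum[q_lo:q_hi])
    return sr / peak_q  # quefrency (samples) -> pitch (Hz)

# Illustrative check on a synthetic "voiced" signal: a 120 Hz
# fundamental with five harmonics of decaying amplitude.
sr = 16000
t = np.arange(int(0.1 * sr)) / sr
f0 = 120.0
voiced = sum(np.sin(2 * np.pi * f0 * k * t) / k for k in range(1, 6))
pitch = cepstrum_pitch(voiced, sr)  # within a few Hz of 120
```

The quefrency resolution is one sample period, so the estimate lands on the nearest integer lag to `sr / f0`; production pitch trackers typically interpolate around the cepstral peak for finer resolution.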
30 Claims
1. An electronic device configured to process audible expressions from users, comprising: the elements set forth in full under "First Claim" above. - View Dependent Claims (2, 3, 4, 5, 6, 7)
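Claim 1's four weighting limitations and the inference over them amount to a weighted score across per-characteristic change measures. A minimal sketch follows; the weights, threshold, feature names, and the thresholding decision rule are all illustrative assumptions, since the claim specifies none of them:

```python
# Minimal sketch of the weighted-change inference recited in claim 1.
# Weights, names, and the threshold are assumptions, not patent values.
WEIGHTS = {
    "pitch_change": 0.2,            # first vocal expression characteristic
    "volume_change": 0.1,           # second vocal expression characteristic
    "lip_sag_change": 0.4,          # first facial characteristic
    "smile_asymmetry_change": 0.3,  # second facial characteristic
}

def infer_health_change(changes, threshold=0.5):
    """Combine identified changes (each normalized to 0..1) into one
    weighted score, and infer a health-status change when the score
    reaches the threshold."""
    score = sum(WEIGHTS[name] * value for name, value in changes.items())
    return score, score >= threshold

score, changed = infer_health_change({
    "pitch_change": 0.5,
    "volume_change": 0.2,
    "lip_sag_change": 0.9,
    "smile_asymmetry_change": 0.7,
})
# score = 0.2*0.5 + 0.1*0.2 + 0.4*0.9 + 0.3*0.7 = 0.69 -> changed
```

The downstream "determine if a vehicle is to be deployed" limitation would then branch on `changed` (or on the magnitude of `score`).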
8. An electronic device, comprising:
a network interface;
a haptic engine configured to provide kinesthetic communication;
at least one computing device; and
computer readable memory including instructions operable to be executed by the at least one computing device to perform a set of actions, configuring the at least one computing device to:
receive, over a network via the network interface, a digitized human vocal expression of a first user;
convert at least a portion of the digitized human vocal expression to text;
process the received digitized human vocal expression using digital signal processing to convert the digitized audible expression from a time domain to a frequency domain, and to perform at least one of dimensionality reduction or warping of two or more frequencies to a first scale;
use the processed digitized human vocal expression to determine characteristics of the human vocal expression, including:
determine a pitch of the human vocal expression,
determine a volume of the human vocal expression,
determine how rapidly the first user is speaking in the human vocal expression,
determine a magnitude and/or power spectrum of the human vocal expression,
determine pauses and the length of pauses in speech in the human vocal expression, and
analyze lexicon usage, syntax, semantics, and/or discourse patterns in speech in the human vocal expression;
compare the determined characteristics of the human vocal expression with baseline, historical characteristics of human vocal expressions associated with the first user to identify changes in human vocal expression characteristics of the first user;
weight, using a first weight, a first identified change with respect to a first vocal expression characteristic of the first user;
weight, using a second weight, a second identified change with respect to a second vocal expression characteristic of the first user;
infer a change in health status of the first user based at least in part on the weighted first identified change with respect to the first vocal expression characteristic of the first user and the weighted second identified change with respect to the second vocal expression characteristic of the first user; and
based at least in part on the inferred change in health status of the first user, determine if a first action is to be taken. - View Dependent Claims (9, 10, 11, 12, 13, 14, 15, 16, 17)
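Claim 8's "how rapidly the first user is speaking" and "pauses and the length of pauses" determinations are often implemented together with short-frame energy thresholding. The sketch below is one assumed approach; the frame length, RMS threshold, and test signal are illustrative:

```python
import numpy as np

def pause_segments(signal, sr, frame_ms=20, threshold=0.01):
    """Label each frame speech/pause by RMS energy and return the
    length in seconds of every contiguous pause run. The energy
    threshold is an assumption and would be calibrated in practice."""
    n = int(sr * frame_ms / 1000)
    frames = signal[: len(signal) // n * n].reshape(-1, n)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    silent = rms < threshold
    pauses, run = [], 0
    for is_silent in silent:
        if is_silent:
            run += 1
        elif run:
            pauses.append(run * frame_ms / 1000.0)
            run = 0
    if run:
        pauses.append(run * frame_ms / 1000.0)
    return pauses

# Illustrative check: 0.3 s of tone, 0.4 s of silence, 0.3 s of tone.
sr = 8000
t = np.arange(int(0.3 * sr)) / sr
tone = 0.5 * np.sin(2 * np.pi * 200.0 * t)
signal = np.concatenate([tone, np.zeros(int(0.4 * sr)), tone])
pauses = pause_segments(signal, sr)  # one pause of 0.4 s
```

A rapidity estimate can then be derived from the same framing, e.g. syllables or words (from the claim's speech-to-text conversion) divided by total speech time excluding the detected pauses.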
18. A computer implemented method, comprising:
communicating with at least one user using a haptic engine;
receiving, at a system configured to process digitized human vocal expressions using a digital signal processing module, a digitized human vocal expression of a first user from a first user device;
converting, by the system, at least a portion of the digitized human vocal expression to text;
processing, using the digital signal processing module, the received digitized human vocal expression to convert the digitized audible expression from a time domain to a frequency domain, and to perform at least one of dimensionality reduction or warping of two or more frequencies to a first scale;
using, by the system, the processed digitized human vocal expression to determine characteristics of the human vocal expression, including:
determining a pitch of the human vocal expression,
determining a volume of the human vocal expression,
determining how rapidly the first user is speaking in the human vocal expression,
determining a magnitude and/or power spectrum of the human vocal expression,
determining pauses and the length of pauses in speech in the human vocal expression, and
analyzing lexicon usage, syntax, semantics, and/or discourse patterns in speech in the human vocal expression;
comparing one or more of the determined characteristics of the human vocal expression with one or more baseline, historical characteristics of human vocal expressions associated with the first user;
weighting, by the system, using a first weight, a first identified change with respect to a first vocal expression characteristic of the first user;
weighting, by the system, using a second weight, a second identified change with respect to a second vocal expression characteristic of the first user;
inferring, by the system, a change in health status of the first user based at least in part on the weighted first identified change with respect to the first vocal expression characteristic of the first user and the weighted second identified change with respect to the second vocal expression characteristic of the first user; and
based at least in part on the inferred change in health status of the first user, determining if a first action is to be taken. - View Dependent Claims (19, 20, 21, 22, 23, 24, 25)
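Claim 18's comparison of a determined characteristic against "baseline, historical characteristics" is commonly done by normalizing the current measurement against the user's own historical distribution. One assumed formulation is a z-score with a fixed cutoff; the threshold and the sample pitch history below are illustrative:

```python
import statistics

def identify_change(history, current, z_threshold=2.0):
    """Flag a characteristic as changed when the current measurement
    lies more than z_threshold standard deviations from the user's
    baseline. The z-score formulation and cutoff are assumptions."""
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    z = (current - mean) / stdev
    return z, abs(z) > z_threshold

# Baseline speaking pitch (Hz) over prior sessions vs. today's value.
z, changed = identify_change([118, 121, 120, 119, 122], 131)
```

The magnitude `abs(z)` can then serve directly as the "identified change" that the subsequent weighting limitations multiply by the first and second weights.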
26. A computer implemented method, comprising:
communicating with at least one user using a haptic engine;
receiving, at a computerized device configured to process digitized human vocal expressions using a digital signal processing module, a digitized human vocal expression of a first user;
processing, using the digital signal processing module, the received digitized human vocal expression to convert the digitized audible expression from a time domain to a frequency domain, and to perform at least one of dimensionality reduction or warping of two or more frequencies to a first scale;
using, by the computerized device, the processed digitized human vocal expression to determine characteristics of the human vocal expression, including:
determining a volume of the human vocal expression,
determining how rapidly the first user is speaking in the human vocal expression,
generating a spectrum analysis of the human vocal expression,
determining pauses and the length of pauses in speech in the human vocal expression, and
analyzing lexicon usage, syntax, semantics, and/or discourse patterns in speech in the human vocal expression;
comparing, using the computerized device, one or more of the determined characteristics of the human vocal expression with one or more baseline, historical characteristics of human vocal expressions associated with the first user;
weighting, by the computerized device, using a first weight, a first identified change with respect to a first vocal expression characteristic of the first user;
weighting, by the computerized device, using a second weight, a second identified change with respect to a second vocal expression characteristic of the first user;
inferring, by the computerized device, a change in health status of the first user based at least in part on the weighted first identified change with respect to the first vocal expression characteristic of the first user and the weighted second identified change with respect to the second vocal expression characteristic of the first user; and
based at least in part on the inferred change in health status of the first user, determining if a first action is to be taken. - View Dependent Claims (27, 28, 29, 30)
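The abstract notes that volume may be determined from peak heights in a power spectrum of the vocal expression. A minimal sketch of that idea follows; using the `top_n` largest power-spectrum values as a stand-in for true local peaks, and the dB conversion, are assumptions:

```python
import numpy as np

def volume_from_power_spectrum(signal, top_n=3):
    """Estimate loudness (in dB, arbitrary reference) from the mean of
    the top_n largest values of the power spectrum, a simple stand-in
    for the abstract's 'peak heights in a power spectrum'."""
    power = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    peaks = np.sort(power)[-top_n:]
    return 10.0 * np.log10(peaks.mean() + 1e-12)

# A louder rendition of the same utterance yields a higher measure.
sr = 8000
t = np.arange(sr) / sr
quiet = 0.1 * np.sin(2 * np.pi * 200.0 * t)
loud = 0.8 * np.sin(2 * np.pi * 200.0 * t)
quiet_db = volume_from_power_spectrum(quiet)
loud_db = volume_from_power_spectrum(loud)
```

Because spectral power scales with the square of amplitude, the measure is monotone in loudness regardless of the chosen `top_n`, which is what the claim's baseline comparison of volume changes requires.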
Specification