Recognizing gestures from forearm EMG signals
Abstract
A machine learning model is trained by instructing a user to perform prescribed gestures, sampling signals from EMG sensors arranged arbitrarily on the user's forearm with respect to locations of muscles in the forearm, extracting feature samples from the sampled signals, labeling the feature samples according to the corresponding gestures the user was instructed to perform, and training the machine learning model with the labeled feature samples. Subsequently, gestures may be recognized using the trained machine learning model by sampling signals from the EMG sensors, extracting from the signals unlabeled feature samples of the same type as those extracted during training, passing the unlabeled feature samples to the machine learning model, and outputting from the machine learning model indicia of a gesture classified by the machine learning model.
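The train-then-recognize flow the abstract describes can be sketched roughly as follows. The amplitude-only feature and the nearest-centroid classifier are simplifying assumptions for illustration; the patent does not mandate a particular feature set or model.

```python
import numpy as np

def extract_features(window):
    """Mean absolute amplitude per EMG channel for one signal window.

    window: (n_samples, n_channels) array. Amplitude is only one of
    the feature types mentioned; it keeps this sketch short.
    """
    return np.mean(np.abs(window), axis=0)

def train(labeled_windows):
    """Average the feature samples of each instructed gesture into a centroid."""
    by_gesture = {}
    for window, gesture in labeled_windows:
        by_gesture.setdefault(gesture, []).append(extract_features(window))
    return {g: np.mean(f, axis=0) for g, f in by_gesture.items()}

def classify(model, window):
    """Output the gesture whose centroid is nearest to the window's features."""
    f = extract_features(window)
    return min(model, key=lambda g: np.linalg.norm(model[g] - f))

# Toy 4-channel EMG: two instructed gestures with distinct amplitude patterns.
rng = np.random.default_rng(0)
pinch = [(rng.normal(0.0, [1.0, 0.2, 0.2, 0.2], (128, 4)), "pinch") for _ in range(5)]
fist = [(rng.normal(0.0, [0.2, 0.2, 1.0, 1.0], (128, 4)), "fist") for _ in range(5)]
model = train(pinch + fist)
print(classify(model, rng.normal(0.0, [1.0, 0.2, 0.2, 0.2], (128, 4))))  # pinch
```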
20 Claims
1. One or more computer readable memories storing information to enable a computing device to perform a process, the process comprising:
receiving a plurality of electromyography (EMG) signals derived from respective EMG sensors, the sensors being arranged in a wearable device placed arbitrarily on the forearm;
dividing the EMG signals into a sequence of signal samples, each signal sample comprising signal segments of the EMG signals of the respective EMG sensors;
for each signal sample, forming a corresponding feature vector by extracting from the signal sample a plurality of values of different types of features based on the signal segments of the signal sample;
wherein feature vectors include two or more of: amplitudes of individual channels, ratios of amplitudes for channel pairs, coherence ratios for pairs of channels, and frequency energy broken down over subbands;
passing the feature vectors to a machine learning module previously trained during a training session with feature vectors labeled with known gestures to determine gesture classifications of the respective feature vectors for each signal sample;
wherein the training session includes instructing a user to perform a gesture with different arm positions, including one or more of: arm bent, arm extended, palm up, and palm down; and
selecting a single one of the gesture classifications of the respective feature vectors and outputting a particular type of finger movement corresponding to the selected gesture classification.
View Dependent Claims: 2, 3, 4, 5, 6, 7
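The four claimed feature families can be combined into one vector along these lines. The window length, subband count, and the use of a squared correlation coefficient as a dependency-free stand-in for spectral coherence are all illustrative assumptions, not the claimed computation.

```python
import numpy as np
from itertools import combinations

def feature_vector(segment, n_subbands=4):
    """Build one feature vector from a (n_samples, n_channels) EMG segment.

    Covers the claim's four feature families: per-channel amplitudes,
    amplitude ratios and a coherence-like ratio for channel pairs,
    and FFT energy summed over frequency subbands.
    """
    amps = np.mean(np.abs(segment), axis=0)            # amplitude per channel
    feats = list(amps)
    for i, j in combinations(range(segment.shape[1]), 2):
        feats.append(amps[i] / (amps[j] + 1e-12))      # amplitude ratio per pair
        r = np.corrcoef(segment[:, i], segment[:, j])[0, 1]
        feats.append(r * r)                            # squared correlation as a coherence stand-in
    power = np.abs(np.fft.rfft(segment, axis=0)) ** 2  # power spectrum per channel
    for band in np.array_split(power, n_subbands, axis=0):
        feats.extend(band.sum(axis=0))                 # energy per subband, per channel
    return np.asarray(feats)

segment = np.random.default_rng(1).normal(size=(256, 4))
print(feature_vector(segment).shape)  # (32,): 4 amps + 2*6 pair features + 4*4 subband energies
```

For 4 channels there are 6 channel pairs, so the vector has 4 amplitudes, 12 pair features, and 16 subband energies.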
8. A computer-implemented process for determining user gestures, comprising using a computing device to perform process actions for:
receiving a plurality of electromyography (EMG) signals derived from respective EMG sensors, the sensors being arranged in a wearable device placed arbitrarily on the forearm;
dividing the EMG signals into a sequence of signal samples, each signal sample comprising signal segments of the EMG signals of the respective EMG sensors;
for each signal sample, forming a corresponding feature vector by extracting from the signal sample a plurality of values of different types of features based on the signal segments of the signal sample;
wherein feature vectors include two or more of: amplitudes of individual channels, ratios of amplitudes for channel pairs, coherence ratios for pairs of channels, and frequency energy broken down over subbands;
passing the feature vectors to a machine learning module previously trained during a training session with feature vectors labeled with known gestures to determine gesture classifications of the respective feature vectors for each signal sample;
wherein the training session includes instructing a user to perform a gesture with different arm positions, including one or more of: arm bent, arm extended, palm up, and palm down; and
selecting a single one of the gesture classifications of the respective feature vectors and outputting a particular type of finger movement corresponding to the selected gesture classification.
View Dependent Claims: 9, 10, 11, 12, 13, 14
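The final step, selecting a single one of the per-sample gesture classifications, could be a simple majority vote over the sample sequence. The vote and the gesture-to-finger-movement table here are assumptions for illustration; the claim only requires that one classification be selected and a finger movement output.

```python
from collections import Counter

# Hypothetical mapping from gesture class to the finger movement reported out.
FINGER_MOVEMENTS = {"pinch": "index finger press", "fist": "all fingers curl"}

def select_gesture(classifications):
    """Collapse per-sample classifier outputs to one gesture by majority vote."""
    gesture, _count = Counter(classifications).most_common(1)[0]
    return gesture

per_sample = ["pinch", "fist", "pinch", "pinch", "rest", "pinch"]
gesture = select_gesture(per_sample)
print(gesture, "->", FINGER_MOVEMENTS.get(gesture, "unknown"))  # pinch -> index finger press
```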
15. A device for determining user gestures based on electromyography (EMG) signals derived from EMG sensors, comprising:
a processor for interacting with one or more modules;
a feature extraction module configured to: receive a plurality of EMG signals derived from a plurality of EMG sensors arranged in a wearable device placed arbitrarily on the forearm; divide the EMG signals into a sequence of signal samples, each signal sample comprising signal segments of the EMG signals of the EMG sensors; and form a corresponding feature vector for each signal sample by extracting from the signal sample a plurality of values of different types of features based on the signal segments of the signal sample;
wherein feature vectors include two or more of: amplitudes of individual channels, ratios of amplitudes for channel pairs, coherence ratios for pairs of channels, and frequency energy broken down over subbands;
a machine learning module configured to receive feature vectors, the machine learning module having been previously trained during a training session with feature vectors labeled with known gestures to determine gesture classifications of the respective feature vectors for each signal sample;
wherein the training session includes instructing a user to perform one or more gestures with different arm positions, including one or more of: arm bent, arm extended, palm up, and palm down; and
a gesture analysis module configured to select a single one of the gesture classifications of the respective feature vectors and output a particular type of finger movement corresponding to the selected gesture classification.
View Dependent Claims: 16, 17, 18, 19, 20
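Claim 15's three modules could be wired together roughly like this. The class names, the fixed window length, and the centroid-based stand-in for the previously trained machine learning module are assumptions for illustration, not the claimed implementation.

```python
from collections import Counter
import numpy as np

class FeatureExtractor:
    """Feature extraction module: segments raw EMG and emits feature vectors."""
    def __init__(self, window=128):
        self.window = window
    def extract(self, emg):
        # emg: (n_samples, n_channels); any trailing partial window is dropped.
        usable = (len(emg) // self.window) * self.window
        return [np.mean(np.abs(emg[s:s + self.window]), axis=0)  # amplitude features only
                for s in range(0, usable, self.window)]

class MachineLearningModule:
    """Previously trained classifier, reduced here to per-gesture centroids."""
    def __init__(self, centroids):
        self.centroids = centroids
    def classify(self, feature_vectors):
        return [min(self.centroids, key=lambda g: np.linalg.norm(self.centroids[g] - fv))
                for fv in feature_vectors]

class GestureAnalyzer:
    """Gesture analysis module: selects one classification by majority vote."""
    def select(self, classifications):
        return Counter(classifications).most_common(1)[0][0]

# Wire the modules together on synthetic 4-channel EMG.
extractor = FeatureExtractor()
model = MachineLearningModule({"pinch": np.array([1.0, 0.2, 0.2, 0.2]),
                               "fist": np.array([0.2, 0.2, 1.0, 1.0])})
analyzer = GestureAnalyzer()
emg = np.tile([1.0, 0.2, 0.2, 0.2], (512, 1))  # every window matches the "pinch" pattern
print(analyzer.select(model.classify(extractor.extract(emg))))  # pinch
```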
Specification