Assistive clothing
Abstract
An apparatus and process for empowering wheelchair users and others with loss of limb control to walk, climb stairs, sit in ordinary chairs, use normal bathroom facilities, and stand at normal height with their peers. It also provides a means to speed rehabilitation of the injured and to deliver more comfortable and effective prostheses, while providing superior physical therapy, including new simulation and training means, without draining professional resources. Most of the apparatus is worn under normal clothing, allowing users a normal appearance. Additional advancements include improved actuator design, user interface, visual means, and advanced responsive virtual reality.
38 Claims
1. A user interface comprising:
a sensor located in the mouth sensitive to at least the locations of its contact with the tongue;
a device to receive data;
a power provision component for providing power to in-mouth components; and
a processing facility, operatively connected to said sensor and to said device to receive data, for at least calculating, from the plurality of essentially simultaneously sensed points of tongue contact with said sensor, a single value representative of the approximate location of the tongue on the surface of the sensor and for providing at least data relative to said value to said device to receive data, thus enabling a user to at least communicate a desired movement along a path related to the path of the tongue over the sensor's surface.
Dependent claims: 2, 3, 4, 5, 6, 7, 8
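Claim 1 does not specify how the "single value representative of the approximate location of the tongue" is computed from many simultaneous contact points. A centroid (mean of the contact coordinates) is one minimal sketch of such a reduction; the function name and coordinate convention are illustrative, not from the patent.

```python
def tongue_centroid(contacts):
    """Reduce simultaneously sensed contact points to one representative location.

    `contacts` is a list of (x, y) points where the tongue touches the
    sensor surface. The centroid is one simple choice of the claim's
    "single value"; returns None when no contact is sensed.
    """
    if not contacts:
        return None  # no tongue contact sensed this sample
    n = len(contacts)
    x = sum(p[0] for p in contacts) / n
    y = sum(p[1] for p in contacts) / n
    return (x, y)
```

Tracking this value over successive samples yields the "path of the tongue over the sensor's surface" that the claim relays to the receiving device.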
9. A user interface comprising:
a sensor located in the mouth whose surface is sensitive to at least the locations of its contact with the tongue;
a device to be controlled;
a power provision component for providing power to components; and
a processing facility, operatively connected to said sensor and to said device to be controlled, for generating, from said sensor's output, at least directional data responsive to the path of the tongue's movement on the surface of said sensor and, by at least providing said data to said device to be controlled, directing said device to be controlled analogously to a computer touchpad.
10. A user input device comprising:
a sensor located in the mouth sensitive to locations of tongue contact; and
a processing component, operatively connected to said sensor and to a device to receive data, for at least identifying from sensor data the approximate location of the tongue on the sensor and communicating, to said device to receive data, at least data that can be used to provide guidance relative to the speed with which the tongue moved on the sensor and the direction of that movement.
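Claim 10's speed-and-direction data can be derived from two successive tongue locations and the sampling interval; the claim does not prescribe a method, so the following is only a sketch under that assumption (names and units are illustrative).

```python
import math

def motion_from_samples(p0, p1, dt):
    """Derive speed and direction from two successive tongue locations.

    p0 and p1 are (x, y) tongue locations sampled dt seconds apart.
    Returns (speed, heading) with heading in radians from the +x axis,
    which a receiving device could use as touchpad-style guidance.
    """
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    distance = math.hypot(dx, dy)   # straight-line travel between samples
    return distance / dt, math.atan2(dy, dx)
```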
11. A user input device comprising:
a sensor in the mouth for sensing at least the approximate location of the tongue on the sensor;
a securing facility for securing the position of in-mouth components;
a device to be directed; and
a processing facility, operatively connected to said sensor and having an operative connection with said device to be directed, for calculating and providing to said device to be directed data responsive to tongue contact and the directions of tongue movement over said sensor based on said sensor's data;
whereby devices are provided the kind of data provided by computer touchpads but without requiring the use of hands.
Dependent claims: 12
13. A user interface comprising:
a sensor located in the mouth for sensing at least the location of the tongue on the sensor;
a securing device for securing the user interface's in-mouth components;
a power provision component for providing power to in-mouth components; and
a processing component, operatively connected to said sensor and to a device to be controlled, for at least identifying, from the portion of the sensor in contact with the tongue, a point representative of the approximate location of the tongue on the sensor and communicating, to said device to be controlled, at least data relatable to the distance the tongue moves on the sensor and the direction of that movement;
whereby devices can be directed without need of hands and said user interface can be used to replace mice and pointing devices.
14. A user interface comprising:
a sensor located in the mouth for sensing at least the location of the tongue;
a securing device for securing the user interface's in-mouth components; and
a processing component, operatively connected to said sensor and to a device to receive data, for at least calculating, from the portion of the sensor in contact with the tongue, a reference point responsive to the approximate location of the tongue on the sensor and communicating, to said device to receive data, at least positional data responsive to the relative position of the tongue on the sensor;
whereby said positional data contains data indicative of a selected one of the group comprising a) a point in an area, b) a point on a line and c) both a and b, allowing the hands-free device to be used to replace conventional mice, touchpads, sliders and pointing devices.
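Claim 14's whereby clause distinguishes reporting a point in an area (touchpad/mouse style), a point on a line (slider style), or both. One minimal sketch of that selection, with all names and the projection axis chosen for illustration only:

```python
def positional_report(point, mode):
    """Report a tongue position as 2-D (area), 1-D (line), or both.

    `point` is an (x, y) location on the sensor. `mode` selects among
    the claim's group: "area" returns the 2-D point, "line" projects
    onto a single axis as a slider would, "both" returns the pair.
    """
    x, y = point
    if mode == "area":
        return (x, y)
    if mode == "line":
        return x                # slider-style: keep one coordinate
    if mode == "both":
        return ((x, y), x)
    raise ValueError("unknown mode: %r" % mode)
```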
15. A user interface comprising:
a plurality of sensors, which may be a plurality of sensitive locations on a single component, located in the mouth, sensitive to at least the presence of tongue contact and arrayed in essentially a row and column pattern;
a securing component for securing said plurality of sensors in the mouth;
a device to be controlled; and
a processing facility, operatively connected to said plurality of sensors and to said device to be controlled, for determining at least a representative location in an area spatially comparable to at least the approximate location of at least one said sensor incurring tongue contact and providing, to said device to be controlled, data, including data responsive to said representative location, in order to direct said device to be controlled analogously to a computer touchpad.
16. A user interface comprising:
a sensor assembly located in the mouth having locations of sensitivity to tongue contact whose known spatial arrangement enables the capture of at least relative location data based on where the tongue contacts the sensor assembly;
a power provision component for providing power to in-mouth components;
at least one device to receive data; and
a processing facility, operatively connected to said sensor assembly and to said at least one device to receive data, for at least providing, to said at least one device to receive data, data responsive to said relative location data;
whereby devices are provided the kind of data provided by computer touchpads but without requiring the use of hands.
17. A user interface comprising:
a sensor located in the mouth for sensing at least the locations of its contact with the tongue;
a power provision component for providing power to components;
a device to receive data; and
a processing facility, operatively connected to said sensor and to said device to receive data, for identifying at least, from the plurality of points of potentially simultaneous tongue contact on said sensor, a single relative spatial location based upon the approximate location of the tongue on the sensor and for providing, to said device to receive data, data including said relative spatial location.
18. A user interface comprising:
a touchpad sensor located in the mouth for sensing at least the locations of tongue contact with said touchpad sensor;
a power provision component for providing power to components as needed;
a device to receive data; and
a processing facility, operatively connected to said touchpad sensor and to said device to receive data, for identifying at least a representative value for an approximation of where on the touchpad sensor contact is made, for providing, to said device to receive data, data including data based on said representative value, and for directing said device to receive data analogously to a computer touchpad.
19. A user interface comprising:
a sensor assembly located in the mouth sensitive to at least the locations of tongue contact;
a securing element for securing said sensor assembly and other in-mouth components;
a sending component for sending data;
a receiving component for receiving data from said sending component;
a device to receive data operatively connected to said receiving component;
at least one power provision facility for providing power to components; and
a first processing component for receiving data from said sensor assembly, for identifying from said received data at least a single value indicative of the approximate location of the tongue on said sensor assembly, and for sending processed data that is at least indicative of said approximate location to said device to receive data via said sending component.
Dependent claims: 20
21. A user interface comprising:
a sensor assembly with a plurality of sensitive locations, located in the mouth, sensitive to at least the locations of tongue contact with said sensor assembly;
a device to be controlled; and
a processing facility, operatively connected to said sensor assembly and to said device to be controlled, for processing signals from said sensor assembly into at least data indicative of a position relative to the portion of said sensor assembly that is in contact with the tongue and providing data, based upon said position, to said device to be controlled for the direction of said device to be controlled analogously to a computer touchpad.
Dependent claims: 22
23. A user interface comprising:
a plurality of sensors located in the mouth sensitive to contact with the tongue;
a housing for securing in-mouth user interface components;
a device to receive data;
a power facility for providing power to user interface components; and
a processing facility, operatively connected to said plurality of sensors and to said device to receive data, for calculating, from the output of said plurality of sensors, at least data related to the area of tongue contact on said plurality of sensors and for providing data, including said data related to the area of tongue contact, to said device to receive data.
Dependent claims: 24
25. A user interface comprising:
a plurality of sensors located in the mouth sensitive to contact with the tongue;
a housing for securing in-mouth user interface components;
a device to be controlled; and
a processing facility, operatively connected to said plurality of sensors and to said device to be controlled, for generating and providing to said device to be controlled instructions that are already in a format accepted by said device to be controlled, with said instructions based on positional data drawn from the at least approximate location of the portions of the area populated by said plurality of sensors that are in contact with the tongue;
whereby a device to be controlled can be directed with actions of the tongue and without additional data conversion.
26. A user interface comprising:
a sensor located in the mouth sensitive to at least the location of tongue contact on the sensor;
a power facility for providing power to user interface components;
a device to receive data; and
a processing facility, operatively connected to said sensor and to said device to receive data, for calculating, from the output of said sensor, a value relative to the distance from at least an approximate location of the tongue on said sensor to at least one reference point on said sensor's sensitive area and delivering at least data based on said value to said device to receive data, enabling a user to at least communicate a spatial reference to said device to receive data based upon contact of the tongue on a user-chosen point on the sensor.
Dependent claims: 27, 28, 29, 30
31. A user interface comprising:
a sensor located in the mouth of a user for sensing at least the locations of its contact with the tongue;
a housing in the mouth for securing in-mouth components;
a worn video display;
a device to receive data; and
a processing facility, operatively connected to said sensor and to said worn video display, for at least recognizing user directional instructions communicated by at least the path of tongue movement over said sensor and for controlling image display on said worn video display responsive to said instructions.
Dependent claims: 32
33. A user interface comprising:
a sensor located in the mouth sensitive to the locations of its contact with the tongue;
a securing facility for securing in-mouth components;
a device to receive data; and
a processing facility, operatively connected to said sensor and to said device to receive data, for calculating, from the plurality of essentially simultaneously sensed points of tongue contact with said sensor, a single positional value representative of the approximate location of the tongue on the surface of the sensor, adjusted to offset at least the results of non-planarities in the shape of the sensor array by at least linearizing readings responsive to curved paths on the known shape, and for providing data responsive to said positional value to the device to receive data;
whereby a user can control a device with actions of the tongue on an imperfectly shaped sensor approximately as if it were perfectly shaped, because the processing facility has adjusted the data.
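Claim 33's linearization of a non-planar sensor can be illustrated with a simple assumed geometry: a sensor strip bent along a circular arc of known radius, whose raw electronics report straight-line (chord) offsets. Converting chord to arc length removes the compression curvature introduces, so equal tongue travel yields equal reported travel. The geometry and names here are assumptions for illustration, not the patent's method.

```python
import math

def linearize_arc(raw_x, radius):
    """Map a chord reading from a circularly curved strip to a flat coordinate.

    A chord c subtending angle t on a circle of radius r satisfies
    c = 2 * r * sin(t / 2), so t = 2 * asin(c / (2 * r)) and the
    linearized coordinate is the arc length r * t.
    """
    ratio = max(-1.0, min(1.0, raw_x / (2 * radius)))  # guard asin domain
    half_angle = math.asin(ratio)
    return 2 * radius * half_angle  # arc length r * t with t = 2 * half_angle
```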
34. A user interface comprising:
a plurality of sensors located in the mouth, which may be a plurality of sensitive locations on a single component, sensitive to the location of the tongue;
a head-position sensor for measuring the current position of the head;
a device to be controlled; and
a processing facility, operatively connected to said plurality of sensors, said head-position sensor, and said device to be controlled, for calculating, from the plurality of essentially simultaneously sensed points of tongue contact with said plurality of sensors, a single positional value representative of the approximate location of the tongue on the surface of the sensors, for recognizing, from the head position, spatial instructions associated with head positions, and for communicating data responsive to both the head position and said positional value to direct said device to be controlled.
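Claim 34 fuses a head-position instruction with the tongue's positional value but leaves the fusion unspecified. One sketch, under the assumption that head pitch/yaw selects a coarse spatial instruction while the tongue supplies fine positioning; the thresholds, field names, and instruction labels are all hypothetical.

```python
def combined_command(tongue_xy, head_pose):
    """Merge a tongue location with a head-position spatial instruction.

    `tongue_xy` is the (x, y) positional value from the in-mouth sensor;
    `head_pose` is (pitch, yaw) in degrees. Head pose picks a coarse
    instruction; the tongue value rides along as fine-grained direction.
    """
    pitch, yaw = head_pose
    if pitch > 15:
        coarse = "look-up"
    elif pitch < -15:
        coarse = "look-down"
    elif yaw > 15:
        coarse = "look-right"
    elif yaw < -15:
        coarse = "look-left"
    else:
        coarse = "neutral"
    return {"coarse": coarse, "fine": tongue_xy}
```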
35. A user interface for controlling a body comprising:
a sensor located in the mouth sensitive to the locations of its contact with the tongue;
at least one body-controlling component, responsive to electronic direction, for controlling the activity of at least one body part; and
a processing facility, operatively connected to said sensor and to said body-controlling component, for calculating, from the plurality of essentially simultaneously sensed points of tongue contact with said sensor, a single positional value representative of the approximate location of the tongue on the surface of the sensor and creating and sending commands based on said positional value to said at least one body-controlling component to direct said at least one body part.
36. A user interface comprising:
a sensor located in the mouth sensitive to the locations of contact with the tongue;
at least one component to be directed; and
a processing facility, operatively connected to said sensor and to said at least one component to be directed, for generating a value based upon where the tongue is touching said sensor, for directing said at least one component to be directed to follow a set of steps, and for modifying the execution of said set of steps responsive to said value, thus allowing the user to change the execution in essentially real time;
whereby a potentially extremely complex set of steps can be accomplished with less required attention from the user, since the user may only need to make adjustments to steps being executed.
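Claim 36 describes a pre-programmed step sequence whose execution a live tongue value can modify in real time. A minimal sketch of that pattern follows, assuming the tongue value is used as a speed multiplier sampled before each step; the step format and function names are illustrative only.

```python
def run_steps(steps, tongue_value, base=1.0):
    """Execute a fixed step list while a live tongue value rescales it.

    `steps` is a list of (action, duration) pairs; `tongue_value` is a
    zero-argument callable polled before each step, returning the
    current tongue-derived multiplier. Returns the (action, adjusted
    duration) log, so the user's mid-run adjustments take effect.
    """
    log = []
    for action, duration in steps:
        factor = tongue_value()  # live reading, e.g. tongue forward = faster
        log.append((action, duration / max(factor * base, 1e-6)))
    return log
```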
37. A user interface comprising:
at least one sensor located in the mouth sensitive to contact with the tongue;
an eye-tracking component for identifying at least one value responsive to where the user is looking;
a device to be directed; and
a processing facility, operatively connected to said at least one sensor, to said eye-tracking component and to said device to be directed, for, in response to a tongue contact, at least directing said device to be directed to perform an action and determining what that action should be based on at least a known meaning for said at least one value at the time of said tongue contact;
whereby said processing facility can determine what actions to direct based specifically upon the user's eye-fixation-identified context at the time of said tongue contact.
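Claim 37's gaze-contingent dispatch amounts to a lookup: at the moment of tongue contact, the eye tracker's current target selects which action the tongue tap means. A minimal sketch, with the context names and bindings entirely hypothetical:

```python
def dispatch_on_tap(gaze_target, bindings, default=None):
    """Choose the action for a tongue tap from the current gaze target.

    `bindings` maps a gaze-identified context label (the "known meaning"
    for the eye-tracking value) to an action; the target observed at the
    instant of tongue contact decides which action is issued.
    """
    return bindings.get(gaze_target, default)
```

For example, the same tap could open a door when the user is fixating on it but select a menu entry when the gaze rests on a menu.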
38. A method for controlling a device comprising the steps of:
1. placing in the mouth a housing which contains a sensor sensitive to locations of its contact with the tongue;
2. contacting said sensor with the tongue at a user-desired location and, where applicable, moving it;
3. processing data from said sensor to identify, from the plurality of points on said sensor that are in contact with the tongue, a single reference location value indicative of where the tongue is contacting said sensor;
4. communicating control signals, responsive to at least said single reference location value, to a device to be controlled; and
5. responding, at said device to be controlled, with the action indicated by said single reference location value.
Specification