Multi-Modal, Geo-Tempo Communications Systems
First Claim
1. A flexible, multi-modal system useful in communications among users, capable of synchronizing real world and augmented reality, wherein the system is deployed in centralized and distributed computational platforms, the system comprising:
(a) a multi-modal interface comprising a sensor input interface and a sensor output interface, said multi-modal interface being designed and configured to receive and generate electronic or radio signals, said interface having a modifiable default mode for types of signals received and generated thereby;
(b) a plurality of input devices designed and configured to generate signals representing speech, gestures, pointing direction, and location of a user, and transmit the same to the multi-modal interface, wherein some of the signals generated represent a message from the user intended for dissemination to other users;
(c) a plurality of agents and one or more databases,
(i) wherein the databases are defined by matrices which include input commands or information from one or more input devices, and meanings for some input commands or combinations thereof;
(ii) wherein at least some of the agents are designed and configured to receive signals from the sensor input interface of the multi-modal interface, translate the signals into data, compare the same to a database, generate signals representing meanings as defined by the database, and transmit the signals to the multi-modal interface;
(iii) wherein at least one agent resides on a computational device located on a user, and at least one other agent resides on a computational device remote from the user; and
(iv) wherein at least one agent is a location-monitoring agent designed and configured to receive periodic signals representing location information of users, said signals being generated by an input device located on a user and transmitted to the multi-modal interface for further transmission to the location-monitoring agent, and said agent is further designed and configured to monitor location changes of users, and in association with a database to take at least one of the following actions upon some location changes of users:
(1) generate messages for transmission to the multi-modal interface, for further transmission to one or more output devices; and
(2) modify the default mode of messages to another mode based upon the same or other location changes of users, such modification to be transmitted to the multi-modal interface to modify the default mode of messages generated thereby; and
(d) a plurality of output devices designed and configured to receive and process signals from the sensor output interface of the multi-modal interface, some of said signals representing messages to the user to be communicated by means of an output device in visual, auditory, or tactile modes.
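The databases "defined by matrices" of input commands and meanings in element (c), and the agents that translate signals against them, can be illustrated with a minimal sketch. All names, commands, and meanings below are hypothetical; the claim does not specify an implementation:

```python
# Hypothetical sketch of element (c): a "matrix" database mapping input
# commands to meanings, and an agent that translates incoming signals,
# compares them to the database, and returns the defined meaning.

COMMAND_MATRIX = {
    # (input device, normalized command) -> meaning
    ("speech", "status"): "REPORT_STATUS",
    ("speech", "mark target"): "DESIGNATE_TARGET",
    ("gesture", "point"): "DESIGNATE_TARGET",
}

def translating_agent(device, raw_signal):
    """Translate a raw signal into data, compare it to the database,
    and return the meaning, or None when the command is unrecognized."""
    data = raw_signal.strip().lower()          # signal -> data
    return COMMAND_MATRIX.get((device, data))  # compare to the database

print(translating_agent("speech", "  Status "))  # prints REPORT_STATUS
```

A combination of inputs (for instance, speech plus a pointing gesture) would key the matrix on a tuple of commands rather than a single command, matching the claim's "input commands or combinations thereof."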
1 Assignment
0 Petitions
Abstract
Disclosed is a flexible, multi-modal system useful in communications among users, capable of synchronizing real world and augmented reality, wherein the system is deployed in centralized and distributed computational platforms. The system comprises input devices to generate signals representing speech, gestures, pointing direction, and location of a user, and transmit the same to a multi-modal interface. A plurality of agents and one or more databases are integrated into the system, where at least some of the agents receive signals from the multi-modal interface, translate the signals into data, compare the same to a database, generate signals representing meanings as defined by the database, and transmit the signals to the multi-modal interface. Finally, a plurality of output devices are associated with the system to receive and process signals from the multi-modal interface, some of said signals representing messages to the user to be communicated by means of an output device.
16 Claims
1. A flexible, multi-modal system useful in communications among users, capable of synchronizing real world and augmented reality, wherein the system is deployed in centralized and distributed computational platforms, the system comprising:
(a) a multi-modal interface comprising a sensor input interface and a sensor output interface, said multi-modal interface being designed and configured to receive and generate electronic or radio signals, said interface having a modifiable default mode for types of signals received and generated thereby;
(b) a plurality of input devices designed and configured to generate signals representing speech, gestures, pointing direction, and location of a user, and transmit the same to the multi-modal interface, wherein some of the signals generated represent a message from the user intended for dissemination to other users;
(c) a plurality of agents and one or more databases,
(i) wherein the databases are defined by matrices which include input commands or information from one or more input devices, and meanings for some input commands or combinations thereof;
(ii) wherein at least some of the agents are designed and configured to receive signals from the sensor input interface of the multi-modal interface, translate the signals into data, compare the same to a database, generate signals representing meanings as defined by the database, and transmit the signals to the multi-modal interface;
(iii) wherein at least one agent resides on a computational device located on a user, and at least one other agent resides on a computational device remote from the user; and
(iv) wherein at least one agent is a location-monitoring agent designed and configured to receive periodic signals representing location information of users, said signals being generated by an input device located on a user and transmitted to the multi-modal interface for further transmission to the location-monitoring agent, and said agent is further designed and configured to monitor location changes of users, and in association with a database to take at least one of the following actions upon some location changes of users:
(1) generate messages for transmission to the multi-modal interface, for further transmission to one or more output devices; and
(2) modify the default mode of messages to another mode based upon the same or other location changes of users, such modification to be transmitted to the multi-modal interface to modify the default mode of messages generated thereby; and
(d) a plurality of output devices designed and configured to receive and process signals from the sensor output interface of the multi-modal interface, some of said signals representing messages to the user to be communicated by means of an output device in visual, auditory, or tactile modes. - View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9)
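The location-monitoring agent of element (c)(iv) — tracking periodic position reports and, on certain location changes, either generating a message or switching the default output mode — can be sketched as follows. The zone geometry, mode names, and trigger rules are illustrative assumptions, not taken from the claim:

```python
# Hypothetical sketch of the location-monitoring agent of element (c)(iv).
# Zone geometry, mode names, and trigger rules are illustrative only.

TACTILE_ZONES = [((50.0, 50.0), 20.0)]  # (center, radius): switch mode here

class LocationMonitoringAgent:
    def __init__(self):
        self.last_seen = {}   # user -> last reported (x, y)
        self.outbox = []      # actions bound for the multi-modal interface

    def on_location(self, user, pos):
        """Receive a periodic location report and act on changes."""
        prev = self.last_seen.get(user)
        self.last_seen[user] = pos
        if prev is None or prev == pos:
            return  # no location change to act on
        # Action (1): generate a message for further transmission
        # to the output devices.
        self.outbox.append(("message", user, f"{user} moved to {pos}"))
        # Action (2): modify the default mode when the user enters
        # a designated zone (e.g. too noisy for auditory output).
        for (cx, cy), r in TACTILE_ZONES:
            if (pos[0] - cx) ** 2 + (pos[1] - cy) ** 2 <= r * r:
                self.outbox.append(("set_mode", user, "tactile"))
```

In this reading, the agent itself only queues actions; the multi-modal interface remains the single point that applies the mode change and fans messages out to output devices, consistent with elements (a) and (d).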
10. A flexible, multi-modal system useful in communications among users, capable of synchronizing real world and augmented reality, wherein the system is deployed in centralized and distributed computational platforms, the system comprising:
(a) a plurality of input devices designed and configured to transmit signals representing the location of a user, speech from a user, and gestures of a user;
(b) a plurality of agents designed and configured to receive signals from one or more input devices, translate said signals into system-specific data, and compare said data to a database to determine further action of the system, wherein said further action comprises (a) obtaining additional information from the user; (b) sending a message to other users; or (c) controlling an object designed and configured to receive signals from the system of the present invention;
(c) wherein at least one agent is located on computing hardware remote from at least one other agent; and
(d) wherein the plurality of agents further comprise:
(i) a mapping agent, said agent comprising a geographical internal space being defined by relative local coordinates, based upon relation to a point set south-east of any point to be mapped; said mapping agent being designed and configured to receive location information from a location sensor input of a user, and translate the same to system relative local coordinates; and
(ii) a collision detection agent, said agent being coupled with a database comprising known objects or structures and their location in the geographical internal space, wherein said agent is designed and configured to receive orientation, azimuth and pitch information from the multi-modal interface or another agent, and location information from the multi-modal interface or the mapping agent, and using the location information and the direction information, determining which objects if any are intersected by a ray in the azimuth and pitch of the pointing device, originating from a vector of the user's location at the point of orientation. - View Dependent Claims (11, 12, 13, 14, 15, 16)
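Claim 10's mapping agent (relative local coordinates anchored to an origin set south-east of every mapped point) and collision detection agent (casting a ray along the pointing device's azimuth and pitch from the user's location) can be sketched roughly as below. The metres-per-degree conversion, the sphere model for known objects, and all names are assumptions; the claim specifies no particular mathematics:

```python
import math

def to_local(lat, lon, origin_lat, origin_lon):
    """Map latitude/longitude to relative local coordinates. With the
    origin set south-east of every mapped point, offsets north (y) and
    west (x) of the origin are both non-negative. Uses the usual
    small-area metres-per-degree approximation."""
    y = (lat - origin_lat) * 111_320.0  # metres north of the origin
    x = (origin_lon - lon) * 111_320.0 * math.cos(math.radians(origin_lat))  # metres west
    return x, y

def ray_hits(user_xyz, azimuth_deg, pitch_deg, objects):
    """Return names of known objects (modeled as spheres in the local
    frame: x west, y north, z up) intersected by a ray cast from the
    user's location in the given azimuth and pitch."""
    az, pt = math.radians(azimuth_deg), math.radians(pitch_deg)
    # Azimuth runs clockwise from north, so its east component is
    # negative along this west-positive x axis.
    d = (-math.sin(az) * math.cos(pt),  # x (west)
         math.cos(az) * math.cos(pt),   # y (north)
         math.sin(pt))                  # z (up)
    hits = []
    for name, center, radius in objects:
        oc = [c - u for c, u in zip(center, user_xyz)]
        along = sum(o * di for o, di in zip(oc, d))  # distance along the ray
        if along < 0:
            continue  # object lies behind the user
        miss2 = sum(o * o for o in oc) - along * along
        if miss2 <= radius * radius:
            hits.append(name)
    return hits
```

For example, a user at the local origin pointing due north at zero pitch would intersect an object centered 100 m to the north, but not when pointing due east; the collision agent would report the hit back through the multi-modal interface.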
Specification