Non-interference field-of-view support apparatus for a panoramic sensor
First Claim
1. A transportable head-mounted non-interference field-of-view apparatus comprising:
a support housing for supporting the apparatus on a head of a user;
a physiological sensing and measurement system configured to determine a field-of-view of the user;
a field-of-view sensor assembly including:
a sensor housing;
a camera system mounted within the sensor housing, the camera system having a plurality of objective lenses facing outward and configured to provide continuous field-of-view coverage of at least the foreground scene;
a display device facing towards the user;
at least one support armature integrated with the sensor assembly and connected to the sensor assembly at a distal end of the support armature and the support housing at a proximal end of the support armature;
circuitry configured to communicatively connect the camera system to a portable electronic device;
at least one actuator attached to the support armature, the at least one actuator being configured to shift the sensor assembly between a deployed position forward of the user's face and an alternate position, the at least one actuator being responsive to command and control signals from the portable electronic device and the sensor assembly such that the camera system is moved forward of the user's face when a hands-free panoramic video telephone conversation is initiated; and
an image processing system for operating on field-of-view imagery provided by the user and the sensor assembly, the image processing system being configured to modify the field-of-view imagery according to the shape of the field-of-view sensor assembly, the display device being configured to display the modified imagery so as to mask the support armature and field-of-view sensor assembly between the user's at least one eye and the foreground scene such that the support armature and field-of-view sensor assembly are hidden and do not block the user's fine focus field-of-view.
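The masking recited in the claim amounts to copying the portion of the foreground scene that the armature and sensor assembly occlude onto the user-facing display, so the hardware appears see-through. A minimal sketch of that compositing step (function and array names are hypothetical; the claim does not specify an implementation):

```python
import numpy as np

def compute_occlusion_patch(foreground, silhouette):
    """Copy the pixels of the foreground scene that the armature and
    sensor assembly would occlude, for rendering on the user-facing
    display so the hardware appears hidden from the user's eye.

    foreground : HxW array of imagery from the outward camera system
    silhouette : HxW boolean array, True where the assembly lies
                 between the user's eye and the scene
    """
    patch = np.zeros_like(foreground)
    patch[silhouette] = foreground[silhouette]  # only occluded pixels
    return patch
```

In practice the silhouette would be derived from the known shape of the sensor assembly and the eye-to-assembly offset measured by the physiological sensing system; here it is simply given.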
Abstract
A support apparatus with adjustable mechanisms holds a spherical field-of-regard image and audio sensor assembly located at its distal end in front of a user's face such that the user's fine focus field-of-view of the foreground is not blocked by the mast or the sensor assembly. Automated mechanisms rotate and extend the armature and sensor assembly into position for face-to-face panoramic hands-free video teleconferencing, gaming, or logging. A portable wired or wireless host electronic device with a personal assistant application and sensor correlation system interactively communicates with support apparatus circuitry and servos, imagery, audio, eye and ROI tracking, neural, and user input and feedback systems to orchestrate responses to a local or remote user. A sensor assembly includes a VLSIC multi-ROI processing system with integrated camera and display that hides the assembly and provides information to the user or onlooker.
23 Claims
1. A transportable head-mounted non-interference field-of-view apparatus comprising:
a support housing for supporting the apparatus on a head of a user;
a physiological sensing and measurement system configured to determine a field-of-view of the user;
a field-of-view sensor assembly including:
a sensor housing;
a camera system mounted within the sensor housing, the camera system having a plurality of objective lenses facing outward and configured to provide continuous field-of-view coverage of at least the foreground scene;
a display device facing towards the user;
at least one support armature integrated with the sensor assembly and connected to the sensor assembly at a distal end of the support armature and the support housing at a proximal end of the support armature;
circuitry configured to communicatively connect the camera system to a portable electronic device;
at least one actuator attached to the support armature, the at least one actuator being configured to shift the sensor assembly between a deployed position forward of the user's face and an alternate position, the at least one actuator being responsive to command and control signals from the portable electronic device and the sensor assembly such that the camera system is moved forward of the user's face when a hands-free panoramic video telephone conversation is initiated; and
an image processing system for operating on field-of-view imagery provided by the user and the sensor assembly, the image processing system being configured to modify the field-of-view imagery according to the shape of the field-of-view sensor assembly, the display device being configured to display the modified imagery so as to mask the support armature and field-of-view sensor assembly between the user's at least one eye and the foreground scene such that the support armature and field-of-view sensor assembly are hidden and do not block the user's fine focus field-of-view.
View Dependent Claims (2, 3, 4, 5, 6, 7, 8)
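The actuator behavior recited in claim 1, deploying the sensor assembly forward of the user's face when a hands-free panoramic call is initiated, reduces to a small event-driven state machine. A hypothetical sketch (class name and event strings are illustrative, not from the patent):

```python
from enum import Enum

class Position(Enum):
    STOWED = "stowed"
    DEPLOYED = "deployed"

class ArmatureController:
    """Illustrative controller: the actuator swings the mast forward
    when a hands-free panoramic video call starts and retracts it to
    the alternate position when the call ends."""
    def __init__(self):
        self.position = Position.STOWED

    def on_event(self, event):
        if event == "call_initiated":
            self.position = Position.DEPLOYED  # move camera forward of face
        elif event == "call_ended":
            self.position = Position.STOWED    # return to alternate position
        return self.position
```

The portable electronic device would emit the events; the controller stands in for the claim's command-and-control circuitry.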
9. A user selectable multi-mode compensation system for a sensor assembly that blocks the field-of-view of the user comprising:
a support housing for supporting the apparatus on a head of a user;
a physiological sensing and measurement system configured to determine a field-of-view of the user;
a field-of-view sensor assembly including:
a sensor housing;
a camera system mounted within the sensor housing, the camera system having a plurality of objective lenses facing outward and configured to provide continuous field-of-view coverage of at least the foreground scene;
a display device positionable forward of at least one of the user's eyes;
the display device configured to operate with at least one of an optical see-through head-mounted display, a video see-through head-mounted display, and an eye-mounted display device;
a support armature integrated with the sensor assembly and connected to the sensor assembly at a distal end of the support armature and the support housing at a proximal end of the support armature;
an actuator attached to the support armature, the actuator being configured to shift the sensor assembly between a deployed position forward of the user's face and an alternate position, the actuator being responsive to command and control signals from a portable electronic device;
the portable electronic device operating on field-of-view imagery provided by at least one of the camera system and physiological sensing and measurement system for determining a location of the sensor assembly and an offset between the user and the sensor assembly;
an image processing system being configured to display the modified imagery responsive to the selected display system and move the support armature and field-of-view sensor assembly between the user's at least one eye and the foreground scene such that the support armature and field-of-view sensor assembly are hidden and do not block the user's fine focus field-of-view by at least one of a first and second user-selectable mode of operation including:
a first mode comprising the actuator with a mechanism that positions and holds the sensor assembly in place at least outside the fine focus field-of-view of the user such that the fine focus field-of-view of the user is not blocked;
a second mode comprising the armature with sensor assembly, actuator, and holding mechanism having a safety override feature that allows the user to manually move the armature with his or her hands to an alternate position outside the fine focus field-of-view of the user;
a portable electronic device included as part of the system operated by the user with selectable and programmable modes of operation compatible with at least one of the display device, physiological sensing and measurement system, camera assembly, or armature such that the system is responsive to the preferences of the local or a remote user through at least one local or remote user input device or user interface;
said command and control module of the portable electronic device comprising at least one mode of operation to keep the armature and camera assembly from interfering with the view of the scene presented to the user while still maintaining panoramic offset of the camera forward of the face of a user;
an image processing system for operating on field-of-view imagery provided by at least one of the camera system and physiological device for determining a location of the sensor assembly and an offset between the user and the sensor assembly, the image processing system being configured to modify the field-of-view imagery according to the shape of the field-of-view sensor assembly, the display device being configured to display the modified imagery so as to mask the support armature and field-of-view sensor assembly between the user's at least one eye and the foreground scene such that the support armature and field-of-view sensor assembly are hidden and do not block the user's fine focus field-of-view, including at least one of a third and fourth user-selectable mode of operation including:
a third mode comprising an image mask displayed on the display device of the foreground recorded by a forward facing camera located between the eyes of the user and the sensor forward of the face of the user;
image processing and display calculating offset and eye specific imagery to be displayed on the mask presented on the display device for each eye;
a fourth mode comprising an image mask displayed on the sensor assembly to hide the sensor from the view of the user, with the foreground recorded by the sensor matching the view of the foreground the user observes;
image processing and display calculating offset and eye specific left and right eye imagery to be displayed on the auto-stereographic display device for presentation to the user;
circuitry configured to communicatively connect the physiological sensing and measurement system, camera assembly, armature, and display to the portable electronic device;
all required system components including electrical power from at least one battery or portable electrical generation device;
said first and second modes of operation typically selected by the user when generating an image mask is not selected, and the third and fourth modes of operation typically selected when generating an image mask is selected to hide the armature and sensor assembly from blocking the user's field-of-view.
View Dependent Claims (10, 11, 12)
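The four user-selectable modes of claim 9 pair off as the claim's final clause states: modes 1 and 2 apply when no image mask is generated, modes 3 and 4 when a mask is generated to hide the assembly. A hypothetical dispatch sketch (function name and flags are illustrative, not from the patent):

```python
MODES = {
    1: "hold assembly outside fine-focus field-of-view",
    2: "manual safety override: user repositions armature by hand",
    3: "image mask rendered on the head-mounted display",
    4: "image mask rendered on the sensor-assembly display",
}

def select_mode(mask_enabled, prefer_manual=False, mask_on_sensor=False):
    """Pick a compensation mode per the claim's pairing: modes 1-2 when
    no image mask is generated, modes 3-4 when a mask is generated."""
    if mask_enabled:
        return 4 if mask_on_sensor else 3
    return 2 if prefer_manual else 1
```

The claim leaves mode selection to user preference via the portable electronic device; the flags here stand in for that user input.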
13. A support apparatus for neural correlation in a surrounding environment comprising:
a support housing including a mounting structure, armature, and sensor assembly to secure the support apparatus on at least one of the body of a user, eyeglasses, clothing, a prosthetic device, headgear, or a head-mounted display, or as a dismounted apparatus;
said support apparatus designed in at least one of a single housing or modular separate housings;
singularly housed support apparatus components communicatively connected by the circuitry and separately housed support apparatus components communicatively connected by wireless transceivers or a wireless network;
said combined and separated embodiments of the apparatus including an electrical power source;
a user-borne brain activity sensing subsystem for processing and transmitting neural activity patterns, activity, and signature data to the host computer subsystem and associated subsystems, components, and peripheral devices for storage and processing;
a user sensing subsystem configured to collect data corresponding to user events and status and transfer said data to a measurement computer subsystem configured to generate data representing quantifications of perceptions of user activity;
including at least one biometric device for at least one of tracking head position and tracking eye position;
a surrounding environment sensing subsystem configured to collect data corresponding to said user's surrounding environment comprising the following:
a 360 degree field-of-regard audio sensing, recording, processing, transmission, and amplifier subsystem within a surrounding environment integrated with said support housing;
said audio subsystem performing audio processing on a captured audio signal, and driving power amplification of said audio signal transmitted to a speaker or headphone;
said audio being perceptible by a user as the user moves about the environment surrounding the apparatus;
said audio sensor subsystem including a three dimensional microphone system with a plurality of small microphones facing outward from the housing and an acoustical direction system that produces audio signatures;
said audio signatures operable upon by the host computer with cognitive memory and artificial neural networks to detect the relative azimuth, range, and elevation, and predict the identity of entities in and nature of the surrounding environment;
said apparatus operable to play said audio files to replicate the captured three dimensional sound effect by processing the sound and amplification of the sound using at least one of stereo speakers, surround-sound speakers, speaker-arrays, or headphones;
a 360 degree field-of-view image sensing, recording, processing, transmission, and display subsystem which captures at least one image signal within the surrounding environment;
said image subsystem performing image processing on the captured image signal, and driving the 360 degree field-of-view signal transmitted to the display device facing outward from the periphery and included in said support housing;
said display including a user interactive touchscreen in a communicative relationship to a host computer system;
said display being at least one of e-paper, LED, or OLED;
a side of the display being continuously viewable and interactive with the user as the user moves about the environment surrounding the apparatus; and
allowing face-to-face interaction between the user and apparatus;
said system including the ability to operate on said imagery to produce at least one of monoscopic, binocular, stereoscopic, or holographic imagery for display on at least one of the support apparatus or peripheral audio-visual display systems; and
including at least one of visual field direction detection software, firmware, or hardware to detect from imagery a user's or onlooker's visual field direction;
detection from imagery when said apparatus is worn or dismounted;
said apparatus operable to function as an image processing unit which performs predetermined image processing on the image captured by the 360 degree field-of-view image sensing, recording, processing, transmission, and display subsystem to determine the user's or onlooker's visual field direction;
a recording subsystem configured to record said data from said brain activity sensing subsystem, measurement computer subsystem, user sensing subsystem, and surrounding environment sensing subsystem;
a host computer subsystem with a cognitive memory in a communicating relationship with the 360 degree panoramic audio and image subsystems;
the host computing subsystem including at least one of an artificial intelligence or artificial intelligence-like processing system;
the host computer subsystem operating on the recorded 360 degree field-of-regard audio and 360 degree field-of-view image signals to define the physical make-up of the surrounding environment at a given place and time and identify patterns and determine relationships among users, objects, activities, preferences, and agents in the surrounding environment;
said host computer subsystem storing those patterns and relationships in a cognitive memory database that defines the surrounding environment at a given place over time;
said host computer subsystem operating on relationships said computer stores in nonvolatile memory, said computer operating on said relationships at a later time to assist a user and predict future outcomes given previous relationships stored in nonvolatile memory;
said host computing subsystem with cognitive memory including an interactive personal assistant application with a smart audio and image display user interface;
said interface operated by at least one of the user, host computer, or a remote user or agent to command and control said support apparatus and prompt at least one of interactive audio, image, or audio and visual presentation feedback of at least one of local, live, stored, and remote content transmitted to said apparatus in order to interact with said user's environment or a remote environment;
said user interacting with the support apparatus with 360 degree audio and image field of regard display coverage to accomplish actions with the host computer subsystem;
the host computer subsystem including at least one of a telecommunications system and a network with local area network and internet functionality and compatibility;
the host computer subsystem including an electrical system and circuitry to provide electricity to power electronic components of the computer and the associated 360 degree audio and image display subsystems;
said host computer including or comprising a user mobile electronic device and including:
a user mobile electronic device in communication with said brain activity sensing subsystem, measurement computer subsystem, audio and image sensing subsystems, surrounding environment sensing subsystem, and recording subsystem, said user mobile electronic device including an interactive graphic user interface and being configured to:
operate as a host computer processing subsystem for command, control, and processing of signals to and from said brain activity sensing subsystem, user sensing subsystem, surrounding environment sensing subsystem, and correlation subsystem;
command said brain activity sensing subsystem to transmit brain activity and pattern data to said correlation subsystem; and
command said user sensing subsystem and surrounding environment sensing subsystem to transmit processed sensor data to said correlation subsystem, said correlation subsystem being configured to receive and perform correlation processing operations to determine an extent of neural relationships between data received from said user mobile electronic device and said brain activity sensing subsystem, user sensing subsystem, and surrounding environment sensing subsystem to derive neural correlates of consciousness of conscious percepts of said user;
a correlation subsystem incorporating cognitive memory systems storing input audio, imagery, and user brain patterns representing the user and surrounding environment at a given time and place and retrieving said data without knowledge of where stored when cognitive memory is prompted by a query pattern that is related to the sought stored pattern;
a retrieval system of cognitive memory using auto-associative artificial neural networks and techniques for pre-processing a query pattern to establish a relationship between the query pattern and a sought stored pattern, to locate the sought pattern, and to retrieve the best related pattern and ancillary data;
connecting cognitive memory to a host computer and personal electronic device to deliver an output response to said query;
said stored images interrelated by second means for interrelating to said query hit, and updating old data with new data via back-propagation using an iterative process to learn and improve results over time; and
based on results configure and create correlation subsystem data, said correlation subsystem data comprising relationships between said data corresponding to said brain activity of said user and said data corresponding to said user events and surrounding environment; and
a non-transitory computer readable medium configured to store data from said brain activity sensing subsystem, measurement computer subsystem, user sensing subsystem, surrounding environment sensing subsystem, recording subsystem, and correlation subsystem, for performing queries on real-time and near real-time data received from said subsystems, and for determining whether to keep or disregard said data based on pre-established rule-sets and user interactive command and control from said user mobile electronic device; and
said system processing devices configured to process and communicate at least a portion of said data from said brain activity sensing subsystem, measurement computer subsystem, user sensing subsystem, surrounding environment sensing subsystems, recording subsystem, and correlation subsystem into at least one of said support apparatus, a user conveyable system, a peripheral or remote subsystem, or a recipient biological, bio-mechatronic, or mechatronic system;
said wearable device communicatively connected to said support apparatus to provide at least one of sensed brain activity data, derived data, and interactive feedback to the user of the wearable device;
said non-interference field-of-view support apparatus with host computer with user mobile electronic device with a personal assistant application with an artificial intelligence correlation system interactively providing panoramic sensing and feedback to at least one of the user, host computer, peripheral devices, or a remote user or agent;
said user mobile device having the capability to transfer onboard processing functions to servers on the internet or another computer on a network;
said support apparatus with host computer with cognitive memory and at least one of artificial intelligence and artificial intelligence-like hardware, firmware, or software operated to at least one of construct, train, and update an already constructed relational database defining the user and the surrounding environment, and in a succeeding dualistic manner in near real-time dynamically operate on said constructed relational database to assist the user in functioning in the local surrounding environment or operate on a remote environment via a telecommunication system and network.
View Dependent Claims (14, 15, 16, 17, 18, 19)
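Claim 13's cognitive memory recites auto-associative retrieval: a query pattern related to a stored pattern recovers that pattern without knowledge of where it is stored. A classical Hopfield network is one standard way such retrieval is realized; the sketch below is an editorial illustration of that technique, not the patent's disclosed implementation:

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian weight matrix for bipolar (+1/-1) patterns, zero diagonal."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)  # each stored pattern reinforces its correlations
    np.fill_diagonal(W, 0.0)
    return W / len(patterns)

def recall(W, query, steps=10):
    """Settle a (possibly corrupted) query toward the nearest stored pattern."""
    s = np.array(query, dtype=float)
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1.0, -1.0)  # synchronous threshold update
    return s
```

A query that differs from a stored pattern in a few elements converges to the stored pattern, which is the "retrieval without knowledge of where stored" behavior the claim describes.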
20. A panoramic non-interference field-of-view audio-visual assistant support apparatus with artificial intelligence comprising:
a 360 degree field-of-regard audio sensing, recording, processing, transmission, and amplifier subsystem;
said audio subsystem including a three-dimensional audio sensing module with a plurality of small microphones facing outward from a housing that includes an acoustical direction module;
said sensed audio signatures operable upon by said acoustical direction module to detect the relative azimuth, range, and elevation, and predict the identity of entities in and nature of the surrounding environment and produce said detected data;
said acoustical direction system communicating said data to host computer cognitive memory for retrieval and correlation processing using at least one artificial neural network;
the audio subsystem operable to play audio files received from the audio subsystem and host computer to replicate typical audio files in at least one of a monaural, binaural, stereo, or three dimensional sound effect format by processing the sound and amplification of the sound using at least one of stereo speakers, surround-sound speakers, speaker-arrays, or headphones;
said personal assistant monitoring user audio and interactively providing audio to the user as the user moves about the environment surrounding the apparatus;
a 360 degree field-of-view image sensing, recording, processing, transmission, and display subsystem;
said image subsystem including a panoramic camera system;
said panoramic camera system providing at least one of circular or spherical field-of-view coverage about said apparatus;
the image subsystem transmitting at least some portion of the panoramic image to the host computer, or sampling out from a region-of-interest (ROI) sensing module that contains a database of predesignated patterns which the image sensor identifies and samples out to send to the host computer;
said ROI image sampled from at least one of the panoramic sensor or image and transmitted to the host computer for additional processing;
said image subsystem communicating said panoramic and ROI imagery data to host computer cognitive memory for retrieval and correlation processing using at least one artificial neural network;
the image subsystem operable to receive imagery captured by the imagery subsystem or host computer and operate on said imagery to drive at least one of imagery in the form of graphics, text, or video content to produce at least one of monoscopic, binocular, stereoscopic, or holographic content for an associated display system;
said personal assistant monitoring imagery of the user and interactively displaying imagery to the user as the user moves about the environment surrounding the apparatus;
a host computer subsystem with a cognitive memory with an artificial neural network with backpropagation integrated into the housing of the apparatus;
the artificial neural network with backpropagation operating in near-real time on audio and imagery data and information provided by the 360 degree panoramic audio and image subsystems to learn user perceptions, relationships, preferences, and nature of the user in the surrounding environment based on audio and visual information derived;
said host computer storing data derived from audio and image sensor information, and derived metadata of user perceptions, preferences, relationships, and nature, into non-volatile memory;
said audio visual subsystem acoustical direction system and identification system and panoramic imagery and ROI imagery communicating said data to said host computer's cognitive memory for correlation processing using at least a correlation engine with at least one of a comparator, transducer, translator, an artificial neural network, or a combination thereof;
said host computer with at least one of artificial neural network hardware, firmware, or software operating on sensed data to at least one of construct, update, and operate on an already constructed relational database in constructive memory derived from observation by the 360 degree audio and image subsystems observing the user in the surrounding environment;
said relational database including data gathered from other various sources such as biometric sensors and the internet;
then operating on said constructed relational database to assist the user in functioning in near-real time within the local surrounding environment, or on a remote environment the user is interacting with, in conjunction with said host computer connected telecommunication system and network;
the host computing subsystem including interactive virtual assistant functionality, a natural-language user interface, and smart assistant display functionality for interactively providing panoramic sensing and feedback to at least one of the user, the host computer, peripheral devices, or a remote user or agent, and audio-visual presentation of local, live, stored, and remote content transmitted to and from a remote source on a telecommunications system and network in communicating relationship to the host computer system;
said host computer having at least one of the software, firmware, or hardware to present said recorded or live 360 degree spherical FOV panoramic image on said spherical or circular display, conduct multi-point teleconferencing, and display graphic and textual data;
said host computer system in a communicative relationship with the audio-visual system providing the content to the user based on rules established by the user or an administrator of the host computer;
said host computer system with cognitive memory including internet functionality and compatibility; and
said host computer including an electrical system to provide electricity to power electronic components of the computer and associated audio-visual subsystems;
a support housing including a base structure, support armature, and sensor assembly that comprises the personal panoramic audio-visual assistant with artificial intelligence support apparatus;
said housing comprising at least one of a tubular shape, spherical shape, or combination thereof with curved or flattened surfaces;
said housing including a base structure situated on or attached to an object in the surrounding environment;
said housing being positional with said 360 degree panoramic camera, display, and audio system not having moving parts;
said audio microphones and audio amplifiers, camera objective lenses, and display surface facing outward from the periphery, situated for interaction with the user;
said apparatus including an on/off button on the exterior of the apparatus; and
ports including headphone, electrical recharging, and other input, output, and access ports located on the periphery of the housing and accessible to the user;
said housing including at least one of a plug-in electrical power or battery power supply;
said electrical power connected to the electronic components to drive the apparatus's display, camera, and host computer system;
said housing including an internal space within the apparatus to house the host computer;
said host computer having at least one of the functionality of and operating as a personal electronic device, such as a smartphone, or a port for plugging in the personal electronic device so that the apparatus comprises said personal electronic device's functionality;
at least one of the host computer, personal electronic device, or a combination thereof providing command and control of the apparatus;
said display and objective lenses of the camera being secured by fastening or adhesive to the periphery of the housing;
display composition being of at least one e-paper, LED, or OLED;
said display having at least one of a continuous display or a plurality of displays viewable from all directions about the apparatus except the bottom where the base structure is situated on or fastened to an object in the surrounding environment in which the apparatus is situated;
said camera and display, for optimal usage, held by the support armature and situated on the support armature away from the object it is situated upon so that the viewable surface of the display and the camera objective lenses are located in a non-interference field-of-view position on the periphery of the housing, and, for optimal usage, said audio microphones and audio amplifiers located in a non-interference field-of-regard location to facilitate user or onlooker interaction with said interactive apparatus in an optimal manner;
said 360 degree field-of-regard audio sensing, recording, processing, transmission, and amplifier subsystem, 360 degree field-of-view image sensing, recording, processing, transmission, and display subsystem, and host computer subsystem with a cognitive memory with an artificial neural network with backpropagation integrated into the housing of the apparatus communicating and operating to assist the user in functioning in the surrounding environment or a remote environment in which the user is logged.
View Dependent Claims (21, 22, 23)
Specification