Method and mechanism for identifying, protecting, requesting, assisting and managing information
0 Assignments
0 Petitions
Abstract
A method and mechanism used for or by individuals, groups, corporations (profit and non-profit), professionals, businesses, schools, governments, institutions, and machines to help, protect, create, design, record, file, document, publish, authenticate, plan, market, personalize, distribute, manufacture, license, franchise, sponsor, advertise, publicize, track, project, calendar, vote, rate, conference, rent, lease, price, request, propose, order, pay, fund, buy, sell, lend, bank, finance, store, insure, barter, gift, repair, service, build, share, check, install, learn, teach, research, promote, collaborate, adopt, think, monitor, manage, remind, administer, broadcast, target spatial points, communicate, manage energy communication, connect like-minded people, offer self-help, and manage group health, combined with data aggregated from a vision engine, hearing engine, touch engine, taste engine, and smell engine, with data from the mechanical Internet, the life Internet, and fuzzy-logic, abductive, inductive, deductive, and backward-chaining reasoning, to give forward suggestions, proposals, decisions, information, and search in a network and non-network environment.
93 Citations
37 Claims
1.-20. (canceled)
21. A method for human identification, registration, and protection executed by a computer, comprising:
A. providing a 3D camera; an audio and video recorder; a viewer; an RFID tag reader with a coder and decoder; and an infrared reader; initiating a motion detector; aggregating and comparing background with foreground automatically for 3D verification; determining an infrared distance between the camera and a person; forming a pixel box around any moving objects; centering crosshairs on the nose of the person; locking on to the face of the person; storing the aggregated data in video form; extracting and storing a first image; transforming the image utilizing a brightness-modified interpolation method;
B. initiating a modified color interpolation method on the images; converting the images to vector line art and storing the files; establishing the center locked-on point and identifying human eyes as a marker for all other processes; locking on the image with 16 pixels around the edge of the person's face; storing the image file for further analysis; recording the streamed video with an audio phrase spoken by the person;
C. converting the original 3D video and audio files and sending the images to a process server for storing the images; providing a server for storage, security, human key, and tracking features; transforming the images utilizing the brightness-modified interpolation method initiated on the original images; storing the data with a plurality of levels each way darker and lighter; transforming the images utilizing a color-modified interpolation method; storing the data with a plurality of levels of red, green, and blue color up and down from the actual original image color data; using a Fourier wave transformation to convert the images; mapping out the pixel positions of the converted images into an RGB grid PPM file; converting the file's pixels into RGB numbers and then storing the data for later analysis;
D. converting the audio files into an audio waveform pattern and storing the data; converting the audio into images and storing the files; performing a Fourier wave transformation on the original files and storing the files; mapping out the pixel positions into an RGB grid PPM file; converting the pixels into RGB numbers and storing the data for later analysis;
E. converting original video files into images; extracting a plurality of images at the beginning of the audio phrase spoken; extracting a plurality of images at a selected mark of the audio phrase start; extracting a plurality of images backward from the end of the audio phrase stop; storing the extracted images; transforming the images with the brightness-modified interpolation method, which is initiated on the images, and storing the files in a plurality of levels each way darker and lighter; using the color-modified interpolation method on the images and storing the files in a plurality of levels of red, green, and blue color up and down from the actual original image color data; performing a Fourier wave transformation and storing the data; mapping out the pixel positions of the image data into an RGB grid PPM file; converting the pixels into RGB numbers; storing the data for later analysis;
F.
extracting a plurality of random image slices from the original video file; extracting an image slice from the beginning of the audio track where the audio phrase starts; extracting an image slice from the middle of the audio track phrase; extracting an image slice from the end of the spoken audio phrase track; storing the image slices; transforming the images utilizing the brightness-modified interpolation method; storing the data in a plurality of individual levels each way darker and lighter; initiating the color-modified interpolation method; storing the data in a plurality of levels of red, green, and blue color up and down from the actual original image color data; converting the images into an octal dump; storing the octal dump for later analysis;
G. extracting images created from the original video and storing the files; transforming the images utilizing the brightness-modified interpolation method initiated on the images; storing the data in a plurality of levels each way darker and lighter; initiating the color-modified interpolation method on the images and storing the data in a plurality of levels of red, green, and blue color up and down from the actual original image color data; transforming the images by performing an average color matrix analysis; storing the resulting images; converting the data to PPM files and storing the files; processing the PPM files using the brightness-modified interpolation method; processing the data in a plurality of levels on all pixels and saving the data into files for later pattern analysis;
H.
verifying that the original video was of an actual 3D live human; using specific point movement analysis that compares the background to the foreground, identifying movement patterns; extracting images from the original video; transforming the images utilizing the brightness-modified interpolation method; storing the data in a plurality of levels each way darker and lighter; initiating the color-modified interpolation method on the images and storing the data in a plurality of levels of red, green, and blue color up and down from the actual original image color data; reducing the images to just two colors for evaluation; storing those images, if the object is a real 3D live human, for further analysis; storing audio from the video; processing the video and audio; verifying the files; storing the files during the verification state; determining the video and audio spatial point targets utilizing sound wave analysis and infrared distance analysis information taken from the storing for later analysis;
I. then the Human Semantic Phrase Comparative Analysis "G" Processor mechanism comparing the original audio to a phrase typed at registration and spoken; converting the audio into an image file; converting the file using a Fourier wave transformation and storing the file for further analysis; converting the image file to a PPM file; mapping the RGB coordinates of the PPM file into a pattern-matching grid; populating the grid and storing it for later analysis;
J. taking original video and original audio input and storing the files; extracting images from the original video files and storing them; transforming the images utilizing the brightness-modified interpolation method, which is initiated on the images; storing the data in a plurality of levels each way darker and lighter; initiating the color-modified interpolation method on the images; storing the data in a plurality of levels of red, green, and blue color up and down from the actual original image color data; performing a Fourier wave transformation on the images; converting the original audio files using a Fourier wave transformation into specimen files and storing the files; converting all the specimen files into an octal dump; performing code matching on the octal dump files and storing in a database for later analysis;
K. taking original video and original audio and storing the files; initiating the Fourier wave transformation on the files and storing the converted files; creating a 3D model; determining spatial points from the 3D model; performing analysis of the spatial points and calculating the files into numbers and storing for later analysis;
L. receiving audio input and storing it into files; overlaying with the octal dump from step F and the populated grid from step I; mapping out a grid into a file and then storing the file; converting the file into numbers and storing for later analysis;
M.
taking original video and audio input and storing it into files; transforming the video into images tracked and encoded with audio and storing the files; transforming the images utilizing the brightness-modified interpolation method, which is initiated on the images; storing the data in a plurality of levels each way darker and lighter; initiating the color-modified interpolation method on the images; storing the data in a plurality of levels of red, green, and blue color up and down from the actual original image color data; converting the video and audio files using a Fourier wave transformation; performing a maximum distance analysis; performing a mean distance analysis; performing a mathematical error/data fit analysis; performing an average color matrix analysis; performing a fractal dimensions comparison analysis; performing an audio waveform pattern analysis; storing the data; converting the audio into images tracked and encoded with video; mapping out the pixel positions into a grid; converting the image file to a PPM file; converting the pixels into RGB numbers; storing the data for later analysis;
N. taking original video and storing it into files; converting the video into images and storing the image files; transforming the images utilizing the brightness-modified interpolation method, which is initiated on the images; storing the data in a plurality of levels each way darker and lighter; initiating the color-modified interpolation method on the images; storing the data in a plurality of levels of red, green, and blue color up and down from the actual original image color data; converting the files into grayscale images and storing the files; converting the files into vector line art images that are stored; mapping out the pixel positions of the vector line art images into a grid and storing the files; converting the pixels into numbers; and then storing the data for later analysis;
O. taking original video input and storing it into files; converting the original video into images and storing the files; transforming the images utilizing the brightness-modified interpolation method, which is initiated on the images; storing the data in a plurality of levels each way darker and lighter; initiating the color-modified interpolation method on the images; storing the data in a plurality of levels of red, green, and blue color up and down from the actual original image color data; converting the resulting video image files using a Fourier wave transformation mechanism; converting the files into a plurality of grayscale color images; storing the images; mapping out the pixel positions into a grid and storing the image files as PPM files; converting the pixels of the files into RGB numbers;
then the mechanism storing the data for later analysis;
P. taking original video input and storing it into files; converting the original video into images; mapping the images to the audio that is input and tracked and storing the files; mapping out the pixel positions into a grid with a plurality of bands; extracting images from band areas after the audio phrase speaking starts and storing the files; transforming the images utilizing the brightness-modified interpolation method, which is initiated on the images; storing the data in a plurality of levels each way darker and lighter; initiating the color-modified interpolation method on the images; storing the data in a plurality of levels of red, green, and blue color up and down from the actual original image color data; creating a Fourier wave transformation form analysis file and storing it; creating levels of lightness and darkness; converting the image file to a PPM file and storing it; converting the pixels into RGB number patterns; storing the data for analysis;
Q.
taking the original video input and storing it into files; converting the original video into images and storing the images; transforming the images utilizing the brightness-modified interpolation method, which is initiated on the images; storing the data in a plurality of levels each way darker and lighter; initiating the color-modified interpolation method on the images; storing the data in a plurality of levels of red, green, and blue color up and down from the actual original image color data; calculating a pixel edge around facial features and storing the files; locking in on the human eyes and the tip of the nose; calculating and triangulating the distance from the center point of eye to eye and from the center point of the nose; mapping out the pixel positions of the edge pixels and then the eye areas and nose areas into a grid and storing the files; converting the image file to a PPM file and storing it; converting the pixels into RGB numbers and then storing the data for later analysis;
R. taking original video and audio input and storing it into files; extracting the audio from the video's left and right stereo channels; saving the data as two separate files; initiating the Fourier wave transformation mechanism, converting the files to waveform data, and storing the files as image files; converting the file data to numbers and storing the data; overlaying and aligning the waveforms from the two image files and storing the files; analyzing the aligned waveform data, creating new numbers, and storing the files; analyzing 3D distance variations as a test for a real object rather than a flat-field object;
S. comparing information from the registration and the sign-in files; calculating percentages that are positive matches versus percentages that are negative matches for verification;
T. registering a human key by a person looking into the crosshairs on the display screen and speaking a phrase recorded with audio and video; storing and analyzing the audio and video; confirming a uniquely identified registered user; using the system to lock, protect, and unlock single items;
U. confirming a person is looking at the camera while talking or saying a phrase and recording background objects; determining whether an object being viewed by the camera is a 3-dimensional object for verification; comparing the camera results and analyzing the files in an overlay pixel pattern analysis method; calculating the position of forward-focused objects; calculating the position and depth of background-focused objects; calculating the difference between forward-focused objects and background-focused objects and determining a value; using the value to determine a 3D preliminary security decision; creating an audio voice print with input; comparing a first audio print to the last audio print; making a file security decision.
Dependent claims: 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37
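Step A of claim 21 forms a "pixel box around any moving objects" by comparing background with foreground. The claim does not specify the algorithm; a minimal sketch, assuming simple frame differencing on grayscale frames with an assumed threshold, could look like:

```python
# Illustrative sketch only (not the patented implementation): difference two
# grayscale frames and form a bounding "pixel box" around the changed pixels.
def motion_box(prev, curr, thresh=10):
    """prev, curr: equally sized 2D lists of 0-255 intensities.
    Returns (top, left, bottom, right) of changed region, or None."""
    coords = [(i, j)
              for i, row in enumerate(curr)
              for j, v in enumerate(row)
              if abs(v - prev[i][j]) > thresh]
    if not coords:
        return None  # no motion detected between the frames
    ys = [i for i, _ in coords]
    xs = [j for _, j in coords]
    return (min(ys), min(xs), max(ys), max(xs))
```

In a full system the box would seed the face lock-on; here it only demonstrates the background/foreground comparison idea.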
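Several steps (B, C, E, and others) store image data "in a plurality of levels each way darker and lighter" via a brightness-modified interpolation method. As a hedged sketch, assuming the levels are produced by linear intensity scaling with clamping (the claim leaves the exact transform open):

```python
# Illustrative sketch only: generate a plurality of brightness levels of an
# RGB pixel grid, each step darker (negative keys) or lighter (positive keys).
def brightness_levels(pixels, steps=3, delta=0.2):
    """pixels: list of rows of (r, g, b) tuples, values 0-255.
    Returns {level: adjusted copy} for levels -steps..+steps, excluding 0."""
    def clamp(v):
        return max(0, min(255, int(round(v))))
    levels = {}
    for k in range(-steps, steps + 1):
        if k == 0:
            continue  # level 0 is the original image, not re-stored
        factor = 1.0 + k * delta
        levels[k] = [[tuple(clamp(c * factor) for c in px) for px in row]
                     for row in pixels]
    return levels
```

For example, `brightness_levels([[(100, 150, 200)]])[1]` yields one level lighter, `(120, 180, 240)`, under the assumed 20% step.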
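Steps C, D, I, and Q repeatedly map "pixel positions into an RGB grid PPM file" and convert "pixels into RGB numbers." The plain-text P3 variant of the Netpbm PPM format makes this concrete; the helper names below are illustrative, not from the patent:

```python
# Illustrative sketch only: serialize a pixel grid to a plain (P3) PPM string
# and map each pixel position back to its RGB numbers.
def to_ppm(pixels):
    """pixels: list of rows of (r, g, b) tuples."""
    h, w = len(pixels), len(pixels[0])
    lines = ["P3", f"{w} {h}", "255"]  # magic, dimensions, max color value
    for row in pixels:
        lines.append(" ".join(f"{r} {g} {b}" for r, g, b in row))
    return "\n".join(lines) + "\n"

def ppm_to_rgb_grid(text):
    """Parse a P3 PPM string into {(row, col): (r, g, b)}."""
    tokens = text.split()
    assert tokens[0] == "P3", "only the plain PPM variant is handled here"
    w, h = int(tokens[1]), int(tokens[2])  # tokens[3] is the max value
    vals = list(map(int, tokens[4:]))
    grid = {}
    for i in range(h):
        for j in range(w):
            k = 3 * (i * w + j)
            grid[(i, j)] = tuple(vals[k:k + 3])
    return grid
```

A round trip such as `ppm_to_rgb_grid(to_ppm(img))` recovers exactly the per-position RGB numbers the claim says are stored for later analysis.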
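The "Fourier wave transformation" invoked in steps C through P can be illustrated with a textbook discrete Fourier transform that turns an audio sample window into a magnitude pattern. This O(n^2) sketch stands in for whatever transform the specification actually uses and is only suitable for short windows:

```python
import cmath

# Illustrative sketch only: DFT magnitudes of a short sample window,
# standing in for the claim's "Fourier wave transformation".
def dft_magnitudes(samples):
    n = len(samples)
    mags = []
    for k in range(n):
        acc = sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                  for t in range(n))
        mags.append(abs(acc))
    return mags
```

A pure tone of two cycles over eight samples, `[1, 0, -1, 0, 1, 0, -1, 0]`, concentrates its energy in bins 2 and 6, which is the kind of stable pattern the claim stores for later comparison.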
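Steps F and J convert image and specimen files "into an octal dump" for code matching. A minimal sketch, formatted similarly to the Unix `od -b` convention (offset column plus three-digit octal bytes, both assumptions for this example):

```python
# Illustrative sketch only: render bytes as an octal dump for later
# byte-pattern matching, one line per `width` bytes.
def octal_dump(data, width=8):
    lines = []
    for off in range(0, len(data), width):
        chunk = data[off:off + width]
        body = " ".join(f"{b:03o}" for b in chunk)  # 3-digit octal per byte
        lines.append(f"{off:07o} {body}")           # 7-digit octal offset
    return "\n".join(lines)
```

Because the dump is plain text, two files can then be compared line by line, which is one simple way to realize the claim's "code matching on the octal dump files."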
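Step Q calculates and triangulates "the distance from the center point of eye to eye and from the center point of the nose." Assuming the landmarks have already been located as pixel coordinates (the coordinates below are made up for the example), the geometry reduces to two Euclidean distances:

```python
import math

# Illustrative sketch only: eye-to-eye span and eye-midpoint-to-nose-tip
# distance from locked-on facial landmark coordinates.
def face_triangle(left_eye, right_eye, nose_tip):
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    eye_span = dist(left_eye, right_eye)
    mid = ((left_eye[0] + right_eye[0]) / 2,
           (left_eye[1] + right_eye[1]) / 2)
    nose_drop = dist(mid, nose_tip)
    return eye_span, nose_drop
```

The ratio of these two distances is scale-invariant, which is why such triangulated measurements are usable as a stored biometric feature.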
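Step S compares registration and sign-in files by "calculating percentages that are positive matches versus percentages that are negative matches." A hedged sketch, assuming the stored data has been reduced to numeric feature vectors and using an assumed tolerance and threshold:

```python
# Illustrative sketch only: score a sign-in feature vector against the stored
# registration vector; tolerance and threshold values are assumptions.
def match_percentages(registered, candidate, tol=0.05):
    assert len(registered) == len(candidate) and registered
    hits = sum(1 for r, c in zip(registered, candidate)
               if abs(r - c) <= tol * max(abs(r), 1.0))  # within 5% counts
    positive = 100.0 * hits / len(registered)
    return positive, 100.0 - positive

def verify(registered, candidate, threshold=90.0):
    positive, _negative = match_percentages(registered, candidate)
    return positive >= threshold
```

With one feature of four disagreeing, `match_percentages` reports 75% positive versus 25% negative, and `verify` would reject under the assumed 90% threshold.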
Specification