Methods of processing data obtained from medical device
Abstract
Computerized information processing methods for processing data from a medical device, such as an ingestible probe. In one embodiment, the method includes processing data obtained by the probe so as to produce a plurality of image frames, and further processing the frames to identify a plurality of frames of potential interest to a reviewer, and using the identified frames to form a preview of the entire data set. In one implementation, the processing of the data includes identification of one or more artifacts or shapes within the data associated with a medical condition, and selection of the frames for the preview includes those frames with the identified artifacts or shapes.
166 Claims
1. A method of processing data derived from a sensor disposed on a swallowable intestinal probe, said data having been generated while said probe moved through the intestinal tract of a living being, the method comprising:
-
receiving the data at a computerized device comprising a processor and at least one computer program configured to run on the processor; and
processing at least a portion of said received data using at least said processor and said at least one computer program, said processing comprising:
generating a plurality of frames from said at least portion of said received data;
using said at least one computer program to analyze said plurality of frames to identify a subset of said plurality of frames of potential interest to a user of the computerized device, the identification of the subset based at least in part on recognition of (i) a multi-sided shape, or (ii) an artifact, rendered in at least one of the plurality of frames of the subset; and
generating a display on a monitor associated with the computerized device, the display comprising at least a portion of said subset of said plurality of frames displayed in a sequence.
- View Dependent Claims (2-62)
-
2. The method of claim 1, wherein:
-
the generating a display provides the user with a preview of the data, the preview providing the user with said plurality of frames of potential interest in an automatic fashion so as to permit the user to complete their review more rapidly than could be accomplished without the preview; and
the display of the at least portion of said subset comprises selecting said at least portion of said subset based at least in part on scores attributed to said plurality of frames of potential interest by said at least one computer program.
-
-
3. The method of claim 1, wherein said recognition of (i) a multi-sided shape, or (ii) an artifact, rendered in at least one of the frames of the subset, comprises algorithmically analyzing said plurality of frames to identify the (i) multi-sided shape or (ii) the artifact as correlating to data of one or more previously stored templates, the one or more previously stored templates associated with one or more respective diseases or conditions of the intestinal tract.
-
4. The method of claim 3, wherein said one or more respective diseases or conditions of the intestinal tract comprises intestinal polyps.
-
5. The method of claim 3, wherein said one or more respective diseases or conditions of the intestinal tract comprises intestinal carcinomas or ulcerations.
-
6. The method of claim 1, wherein said processing further comprises decompressing the data received at the computerized device using at least a data decompression algorithm, the data received having been compressed by a digital data processor of the swallowable intestinal probe using a data compression algorithm operative to run on the digital data processor prior to transmission to the computerized device; and
wherein the compression of the data reduces a communications bandwidth necessary to transmit the data over a wireless interface of the swallowable intestinal probe used to communicate with the computerized device.
-
7. The method of claim 1, further comprising:
-
compressing the data derived from the sensor using a digital data processor of the swallowable intestinal probe and a data compression algorithm operative to run on the digital data processor to produce compressed data prior to transmission to the computerized device; and
wherein said processing further comprises decompressing the compressed data received at the computerized device using at least a data decompression algorithm operative to run on the processor thereof.
-
-
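The compress-on-probe / decompress-on-receipt flow recited in claims 6 and 7 can be sketched as a simple round trip. The patent does not name a codec, so zlib below is purely an illustrative stand-in, and the payload bytes are invented:

```python
import zlib

# Hypothetical sensor payload; the claims do not specify the codec or
# data format, so zlib and these bytes stand in for the probe's
# on-board compression algorithm and sensor-derived data.
raw_frames = bytes(range(256)) * 64

# On the probe: compress before wireless transmission (claim 7).
compressed = zlib.compress(raw_frames, level=6)

# At the computerized device: decompress on receipt (claim 6).
restored = zlib.decompress(compressed)

assert restored == raw_frames
# Smaller payloads reduce the bandwidth needed over the wireless interface.
print(len(compressed) < len(raw_frames))
```

The round trip is lossless here; the claims are agnostic as to whether the probe's actual compression is lossless or lossy.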
8. The method of claim 1, further comprising using a wireless data interface operating in an unlicensed portion of the radio frequency spectrum to communicate at least a portion of the data derived from the sensor with a portable electronic device of a user substantially in real time with its collection by the swallowable intestinal probe.
-
9. The method of claim 8, wherein the unlicensed portion of the radio frequency spectrum comprises a 2.4 GHz ISM (Industrial, Scientific and Medical) band, and the portable electronic device comprises a cellular telephone.
-
10. The method of claim 8, further comprising transmitting the communicated at least portion of the data derived by the sensor from the portable electronic device to the computerized device.
-
11. The method of claim 10, wherein the portable electronic device comprises an object-oriented computer application program rendered in a Java-based programming language, the object-oriented computer application program configured to communicate data via a network interface with the computerized device as part of a data session, and the method comprises establishing a data session between the portable electronic device and the computerized device for transmission of the at least portion of the data derived from the sensor.
-
12. The method of claim 10, wherein:
-
the using a wireless data interface operating in an unlicensed portion of the radio frequency spectrum to communicate at least a portion of the data derived from the sensor with a portable electronic device of a user comprises using a wireless data interface compliant with a first wireless data protocol and operating in a 2.4 GHz ISM (Industrial, Scientific and Medical) band of the portable electronic device to receive the communicated at least portion of the data derived from the sensor; and
the transmitting the communicated at least portion of the data derived by the sensor from the portable electronic device to the computerized device comprises using the same wireless data interface compliant with the first wireless data protocol and operating in the 2.4 GHz ISM (Industrial, Scientific and Medical) band.
-
-
13. The method of claim 1, further comprising using a wireless data interface operating in a 2.4 GHz ISM (Industrial, Scientific and Medical) band of the radio frequency spectrum to communicate at least a portion of the data derived from the sensor directly with the computerized device.
-
14. The method of claim 1, wherein the receiving the data at the computerized device comprises:
accessing a data storage device of the swallowable intestinal probe to obtain the data, the accessing occurring after the swallowable intestinal probe has been excreted by the living being.
-
15. The method of claim 14, wherein the accessing a data storage device of the swallowable intestinal probe to obtain the data, the accessing occurring after the swallowable intestinal probe has been excreted by the living being, comprises using a device other than the computerized device to establish a data communication link with the swallowable intestinal probe in order to transfer the data from the data storage device to the other device.
-
16. The method of claim 15, wherein the data storage device of the swallowable intestinal probe comprises a flash memory sized to store at least the data.
-
17. The method of claim 1, wherein said processing further comprises:
-
algorithmically analyzing a correlation between the recognized (i) multi-sided shape, or (ii) artifact and data of a previously stored template; and
generating a score based at least on the algorithmic analysis for display to a user.
-
-
18. The method of claim 17, wherein said algorithmically analyzing a correlation between the recognized (i) multi-sided shape, or (ii) artifact and data of a previously stored template further comprises utilizing a plurality of nodal points associated with the (i) multi-sided shape or (ii) artifact.
-
19. The method of claim 18, wherein said plurality of nodal points associated with the (i) multi-sided shape or (ii) artifact are assigned using said at least one computer program, said assignment based at least in part on a pixel intensity analysis.
-
20. The method of claim 19, wherein said pixel intensity analysis comprises identifying at least a portion of a boundary of the (i) multi-sided shape or (ii) artifact.
-
21. The method of claim 20, wherein said identifying at least a portion of a boundary of the (i) multi-sided shape or (ii) artifact comprises identifying the at least portion of the boundary based at least in part on a change in pixel intensity as a function of spatial position within a pixel map.
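The pixel-intensity boundary analysis recited in claims 19-21 can be sketched as follows; the threshold value, the frame contents, and the function name are illustrative assumptions, not taken from the patent:

```python
# Sketch of claims 19-21: assign nodal points where pixel intensity
# changes sharply as a function of spatial position within a pixel map.
def boundary_nodal_points(frame, threshold=50):
    """Return (row, col) points where the horizontal or vertical
    intensity difference to a neighboring pixel exceeds `threshold`."""
    points = []
    rows, cols = len(frame), len(frame[0])
    for r in range(rows - 1):
        for c in range(cols - 1):
            dx = abs(frame[r][c + 1] - frame[r][c])  # horizontal change
            dy = abs(frame[r + 1][c] - frame[r][c])  # vertical change
            if dx > threshold or dy > threshold:
                points.append((r, c))
    return points

# Toy 4x4 grayscale frame: bright feature in the lower-right corner.
frame = [
    [10, 10, 10, 10],
    [10, 10, 10, 10],
    [10, 10, 200, 200],
    [10, 10, 200, 200],
]
print(boundary_nodal_points(frame))
```

The returned points trace where the dark background meets the bright feature, i.e. a portion of the feature boundary in the sense of claim 20.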
-
22. The method of claim 18, wherein said processing further comprises performing at least one curve-fitting analysis based at least on the plurality of nodal points.
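The curve-fitting analysis of claim 22 can be sketched with a least-squares fit through the nodal points. A real implementation might fit ellipses or splines to a feature boundary; the straight-line model and the sample points below are assumptions for illustration:

```python
# Sketch of claim 22: fit a mathematical function (here a line) to the
# plurality of nodal points assigned along a feature boundary.
def fit_line(points):
    """Least-squares fit of y = a*x + b through (x, y) nodal points."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Illustrative nodal points lying exactly on y = 2x + 1.
a, b = fit_line([(0, 1), (1, 3), (2, 5), (3, 7)])
print(round(a, 6), round(b, 6))
```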
-
23. The method of claim 1, wherein said analyzing the plurality of frames further comprises selecting a portion of at least one of said plurality of frames for analysis based at least on a field of view (FOV) of the sensor.
-
24. The method of claim 1, wherein said analyzing the plurality of frames further comprises selecting for further analysis a region of interest within a field of view (FOV) of the sensor for at least one of said plurality of frames.
-
25. The method of claim 1, wherein said analyzing the plurality of frames further comprises accessing one or more stored representations of at least one anatomical aspect previously obtained from a human being using at least one swallowable intestinal probe for comparison to the (i) multi-sided shape or (ii) artifact, the one or more stored representations of the at least one anatomical aspect comprising visual-band imagery of one or more disease conditions of the intestinal tract.
-
26. The method of claim 1, wherein said analyzing the plurality of frames further comprises using a relative distance analysis of at least portions of the plurality of frames as part of said recognition of said (i) multi-sided shape or (ii) artifact.
-
27. The method of claim 26, wherein the using the relative distance analysis comprises using one or more ratios of distances relating to the (i) multi-sided shape or (ii) artifact.
-
28. The method of claim 1, wherein said analyzing the plurality of frames further comprises mathematically constraining the analyzing of at least one of the plurality of frames to one or more possible orientations of the (i) multi-sided shape or (ii) artifact within the intestinal tract.
-
29. The method of claim 28, wherein said mathematically constraining the analyzing of at least one of the plurality of frames to one or more possible orientations of the (i) multi-sided shape or (ii) artifact within the intestinal tract comprises:
-
using a mathematical model of the intestinal tract correlating to a cylinder;
using a mathematical model of the sensor of the intestinal probe being disposed along a longitudinal axis of the probe, the longitudinal axis being coaxial or nearly coaxial with a longitudinal axis of the cylinder; and
using a mathematical model of the (i) multi-sided shape or (ii) artifact being disposed on or connected to an interior surface of the cylinder.
-
-
30. The method of claim 1, wherein said recognition of (i) a multi-sided shape, or (ii) an artifact, rendered in at least one of the frames of the subset, comprises:
-
assigning a plurality of nodal points to at least a portion of a feature boundary of the (i) multi-sided shape, or (ii) artifact;
performing curve-fitting to assign at least one mathematical function to the at least portion of the feature boundary; and
determining two or more parameters of the (i) multi-sided shape, or (ii) artifact based at least on the at least one mathematical function.
-
-
31. The method of claim 30, wherein said recognition of (i) a multi-sided shape, or (ii) artifact, rendered in at least one of the frames of the subset, further comprises searching a database of stored templates based on at least one of the two determined parameters to identify one or more prospective matches for the (i) a multi-sided shape, or (ii) artifact.
-
32. The method of claim 31, wherein said recognition of (i) a multi-sided shape, or (ii) artifact, rendered in at least one of the frames of the subset, further comprises assigning a score to at least one of the plurality of stored templates identified as one or more of the prospective matches for the (i) a multi-sided shape, or (ii) artifact.
-
33. The method of claim 1, wherein said received data comprises digital image data comprising color image data, and said method further comprises analyzing an intensity of one or more particular colors or wavelengths of light within said color image data.
-
34. The method of claim 33, wherein the one or more colors or wavelengths is/are selected from the group consisting of red, green and blue (R-G-B).
-
35. The method of claim 1, wherein said received data comprises digital image data comprising color image data, and said identification of the subset is based at least in part on analyzing an intensity of one or more particular colors or wavelengths of light within said color image data, the one or more colors or wavelengths selected from the group consisting of red, green and blue (R-G-B).
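The color-intensity analysis of claims 33-35 can be sketched by scoring each frame on one R-G-B channel; the threshold, channel choice, frame names, and pixel values are all illustrative assumptions:

```python
# Sketch of claims 33-35: identify a subset of frames by analyzing the
# intensity of one particular color (here the red channel).
def mean_channel(frame, channel):
    """Mean intensity of an R-G-B channel (0=R, 1=G, 2=B) over a frame
    given as a list of (r, g, b) pixel tuples."""
    return sum(px[channel] for px in frame) / len(frame)

# Hypothetical two-pixel frames standing in for full images.
frames = {
    "frame_a": [(200, 40, 40), (180, 50, 60)],   # red-dominant
    "frame_b": [(90, 90, 90), (100, 100, 100)],  # neutral
}

# Select frames whose mean red intensity exceeds an assumed threshold.
subset = [name for name, f in frames.items() if mean_channel(f, 0) > 150]
print(subset)
```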
-
36. The method of claim 35, wherein:
-
said at least one computer program comprises a user interface; and
said generating a display on a monitor associated with the computerized device, the display comprising at least a portion of said subset of frames displayed in a sequence comprises generating the sequence at a prescribed display rate, the display rate selectable by a reviewer via the user interface of the at least one computer program.
-
-
37. The method of claim 1, wherein said plurality of frames each comprise a plurality of pixels, and said identification of the subset is based at least in part on performing a homologity analysis of the plurality of frames and a previously stored reference image or frame.
-
38. The method of claim 37, wherein said homologity analysis of the plurality of frames and a previously stored reference image or frame comprises comparing an intensity value for each of a plurality of pixels within at least one of the plurality of frames to a corresponding intensity value of a respective one of a plurality of pixels in the previously stored reference image or frame.
-
39. The method of claim 38, wherein said comparing an intensity value for each of a plurality of pixels within at least one of the plurality of frames to a corresponding intensity value of a respective one of a plurality of pixels in the previously stored reference image or frame comprises a best correction principle wherein:
-
r1(c,l) comprises a matrix of the at least one of the plurality of frames;
r2(c,l) comprises a matrix of the previously stored reference image or frame;
P1=(c1,l1) comprises a pixel of the at least one of the plurality of frames;
P2=(c2,l2) comprises a pixel of the previously stored reference image or frame; and
wherein r1(P1)=r2(P2) for at least a portion of the plurality of pixels of the at least one frame and the previously stored reference image or frame.
-
-
40. The method of claim 39, further wherein vP1(c,l)=vP2(c,l) for −h≦c≦h, −k≦l≦k, where:
1) vP1(c,l)=r1(c1+c, l1+l), and
2) vP2(c,l)=r2(c2+c, l2+l), −h≦c≦h, −k≦l≦k,
comprise first and second local regions of pixels P1 and P2, respectively.
-
41. The method of claim 37, wherein said homologity analysis of the plurality of frames and a previously stored reference image or frame comprises comparing an intensity value for each of a plurality of pixels within at least one of the plurality of frames to a corresponding intensity value of a respective one of a plurality of pixels in the previously stored reference image or frame, the comparing determining an index I(vP1,vP2), the index defined through at least one of:
(i) I(vP1,vP2)=d1(vP1,vP2)=Σc,l|vP1(c,l)−vP2(c,l)|; and/or
(ii) I(vP1,vP2)=Σc,l vP1(c,l)vP2(c,l);
where vP1 and vP2 comprise first and second local regions of pixels P1 and P2, respectively.
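The two indices defined in claim 41 translate directly into code. The local regions below are tiny illustrative matrices; the claims leave the region size (h, k) open:

```python
# Sketch of claims 38-42: compare local pixel regions vP1, vP2 of a
# frame and a reference image using the two indices the claims define:
#   (i)  d1(vP1, vP2) = sum over (c, l) of |vP1(c,l) - vP2(c,l)|
#   (ii) I(vP1, vP2)  = sum over (c, l) of vP1(c,l) * vP2(c,l)
def index_d1(v1, v2):
    """Sum of absolute intensity differences (index (i))."""
    return sum(abs(a - b) for row1, row2 in zip(v1, v2)
               for a, b in zip(row1, row2))

def index_corr(v1, v2):
    """Sum of intensity products (index (ii))."""
    return sum(a * b for row1, row2 in zip(v1, v2)
               for a, b in zip(row1, row2))

# Identical local regions: d1 reaches its optimum (0), and the
# correlation index reaches the theoretical value obtained from two
# identical regions, as contemplated by claim 42.
vP1 = [[1, 2], [3, 4]]
vP2 = [[1, 2], [3, 4]]
print(index_d1(vP1, vP2))    # 0 for identical regions
print(index_corr(vP1, vP2))  # 1 + 4 + 9 + 16 = 30
```

Per claim 42, an implementation would compare the computed index against that optimum to decide how close the frame region is to the reference.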
-
42. The method of claim 41, further comprising comparing the determined index to an optimum or theoretical value obtained by using two identical local regions of the at least one frame and the previously stored reference image or frame, respectively.
-
43. The method of claim 37, wherein said homologity analysis of the plurality of frames and a previously stored reference image or frame comprises comparing each of a plurality of pixels within at least one of the plurality of frames to a corresponding respective one of a plurality of pixels in the previously stored reference image or frame.
-
44. The method of claim 43, wherein said comparing each of a plurality of pixels within at least one of the plurality of frames to a corresponding respective one of a plurality of pixels in the previously stored reference image or frame comprises a best correction principle wherein:
-
r1(c,l) comprises a matrix of the at least one of the plurality of frames;
r2(c,l) comprises a matrix of the previously stored reference image or frame;
P1=(c1,l1) comprises a pixel of the at least one of the plurality of frames;
P2=(c2,l2) comprises a pixel of the previously stored reference image or frame; and
wherein r1(P1)=r2(P2) for at least a portion of the plurality of pixels of the at least one frame and the previously stored reference image or frame.
-
-
45. The method of claim 44, further wherein vP1(c,l)=vP2(c,l) for −h≦c≦h, −k≦l≦k, where:
1) vP1(c,l)=r1(c1+c, l1+l), and
2) vP2(c,l)=r2(c2+c, l2+l), −h≦c≦h, −k≦l≦k,
comprise first and second local regions of pixels P1 and P2, respectively.
-
46. The method of claim 37, wherein said homologity analysis of the plurality of frames and a previously stored reference image or frame comprises comparing each of a plurality of pixels within at least one of the plurality of frames to a corresponding respective one of a plurality of pixels in the previously stored reference image or frame, the comparing determining an index I(vP1,vP2), the index defined through a distance (d) of the form:
d1(vP1,vP2)=Σc,l|vP1(c,l)−vP2(c,l)|;
where vP1 and vP2 comprise first and second local regions of pixels P1 and P2, respectively.
-
47. The method of claim 1, wherein the sensor comprises an imaging sensor, and the intestinal probe further comprises an accelerometer apparatus configured to detect acceleration of at least the intestinal probe, and based on said detection, cause at least initiation of imagery collection by the sensor; and
wherein the analysis comprises analyzing a plurality of frames associated with data derived from imagery collected only after said initiation.
-
-
48. The method of claim 1, wherein the sensor disposed on the swallowable intestinal probe comprises an imagery sensor, and wherein:
-
the intestinal probe comprises an accelerometer apparatus configured to detect acceleration of at least the intestinal probe; and
the method further comprises:
detecting, using at least said accelerometer apparatus, acceleration of at least the intestinal probe;
causing, based at least on the detecting, initiation of imagery collection by the imagery sensor;
converting at least a portion of the collected imagery to digital data; and
compressing the digital data before transmission off of the intestinal probe; and
wherein the analysis comprises analyzing only those of the plurality of image frames associated with data converted and compressed from imagery collected after said initiation.
-
-
49. The method of claim 1, wherein said generating a display on a monitor associated with the computerized device, the display comprising at least a portion of said subset of frames displayed in a sequence comprises generating the sequence at greater than or equal to a prescribed minimum display rate, the prescribed minimum display rate selected based at least on being able to detect motion-related artifact relating to one or more anatomical features of the intestinal tract during review.
-
50. The method of claim 1, further comprising selectively deleting or omitting, using the at least one computer program, at least one of said plurality of frames before said generating a display so as to expedite user review of the received data.
-
51. The method of claim 50, wherein the selectively deleting or omitting of at least one of said plurality of frames comprises selectively deleting or omitting according to a prescribed temporal window or number of frames.
-
52. The method of claim 51, wherein the prescribed temporal window or number of frames comprises a prescribed number of frames before a user-selected frame of interest in a sequence of the plurality of frames, and a prescribed number of frames after the user-selected frame of interest, at least one of the prescribed number of frames before the frame of interest, and at least one of the prescribed number of frames after the frame of interest, being temporally contiguous with the frame of interest.
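The temporal-window selection of claims 51-52 can be sketched as slicing the frame sequence around a user-selected frame of interest; the window sizes and the integer frame identifiers are illustrative assumptions:

```python
# Sketch of claims 51-52: retain a prescribed number of frames before
# and after a user-selected frame of interest, omitting the rest so as
# to expedite review.
def window_subset(frames, interest_idx, before=2, after=2):
    """Keep `before` frames before and `after` frames after the frame
    of interest (inclusive); the kept frames are temporally contiguous
    with the frame of interest."""
    start = max(0, interest_idx - before)
    stop = min(len(frames), interest_idx + after + 1)
    return frames[start:stop]

frames = list(range(10))          # stand-in frame identifiers 0..9
print(window_subset(frames, 5))   # frames around the selected frame 5
```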
-
53. The method of claim 50, wherein the selectively deleting or omitting of at least one of said frames before causing display of remaining ones of the plurality of frames so as to expedite review of the received data comprises selectively deleting or omitting according to a user-prescribed sampling rate.
-
54. The method of claim 50, wherein the selectively deleting or omitting of at least one of said frames before causing display of remaining ones of the plurality of frames so as to expedite review of the received data comprises selectively deleting or omitting based at least on an analysis of an intensity of one or more particular colors or wavelengths of light within said received data, the one or more colors or wavelengths selected from the group consisting of red, green and blue (R-G-B).
-
55. The method of claim 50, wherein the selectively deleting or omitting of at least one of said frames before causing display of remaining ones of the plurality of frames so as to expedite review of the received data comprises selectively deleting or omitting based at least on analyzing a similarity of one or more portions of at least one of said plurality of frames as compared to another frame.
-
56. The method of claim 50, wherein the selectively deleting or omitting of at least one of said frames before causing display of remaining ones of the plurality of frames so as to expedite review of the received data comprises selectively deleting or omitting based on analyzing one or more portions of at least one of said plurality of frames, the analyzing one or more portions comprising identifying of one or more portions of the received data corresponding to a reduced probability of disease relative to other portions, the selectively deleting or omitting comprising deleting or omitting of one or more frames associated with said identified one or more portions.
-
57. The method of claim 1, wherein:
-
the generating a plurality of frames from said at least portion of said received data comprises generating a first plurality of frames associated with a first portion of the intestinal tract;
the using said at least one computer program to analyze said plurality of frames to identify a subset of said plurality of frames of potential interest to a user of the computerized device, the identification of the subset based at least in part on recognition of (i) a multi-sided shape, or (ii) an artifact, rendered in at least one of the frames of the subset, comprises using said at least one computer program to analyze said first plurality of frames to identify a first subset of said first plurality of frames of potential interest to the user, the identification of the subset based at least in part on recognition of (i) the multi-sided shape, or (ii) the artifact, rendered in at least one of the frames of the first subset; and
wherein the method further comprises:
generating a second plurality of frames associated with a second portion of the intestinal tract, the second portion of the intestinal tract being subsequent to the first portion in terms of travel of the swallowable intestinal probe; and
using said at least one computer program to analyze said second plurality of frames to identify a second subset of said second plurality of frames of potential interest to a user of the computerized device based at least in part on a comparison of the second plurality of frames to the recognized (i) multi-sided shape, or (ii) artifact rendered in at least one of the frames of the first subset; and
wherein the generating a display on a monitor associated with the computerized device, the display comprising at least a portion of said subset of frames displayed in a sequence, comprises displaying at least a portion of the second subset of frames as part of a sequence, the sequence also comprising at least a portion of the first subset of frames, the at least portion of the first subset of frames displayed within the sequence before the at least portion of the second subset of frames.
-
-
58. The method of claim 1, wherein said generating a display on a monitor associated with the computerized device, the display comprising at least a portion of said subset of frames displayed in a sequence, further comprises selectively causing alteration of a display rate of at least a portion of the sequence on the monitor so as to expedite review of the received at least portion of image data.
-
59. The method of claim 58, wherein said selectively causing alteration of the display rate is based at least on analyzing a similarity of one or more portions of at least one of said plurality of frames as compared to another frame within said plurality of frames.
-
60. The method of claim 58, wherein said selectively causing alteration of the display rate is based at least on analyzing one or more portions of at least one of said plurality of frames, the analyzing comprising identifying one or more portions of the received data corresponding to a low probability of disease, the selectively causing alteration comprising increasing said display rate during said identified one or more portions.
-
61. The method of claim 1, further comprising:
-
accessing, using at least said at least one computer program, one or more stored data representations of one or more of (i) a multi-sided shape, or (ii) an artifact previously obtained from the same living being; and
performing an algorithmic comparison of the recognized (i) multi-sided shape, or (ii) artifact, present in the received data, and the one or more stored data representations of one or more of (i) a multi-sided shape, or (ii) an artifact previously obtained from the same living being.
-
-
62. The method of claim 1, further comprising using the recognition of the (i) multi-sided shape, or (ii) artifact, as a gating or “go/no-go” criterion for further analytical steps to be performed by the at least one computer program.
-
63. A method of processing data derived from a visual band sensor disposed on an autonomous intestinal probe, said data having been generated while said probe moved autonomously through the intestinal tract of a human being, the method comprising:
-
receiving the data derived from the visual band sensor at a computerized device comprising a display device, a processor and at least one computer program configured to run on the processor; and
processing at least a portion of said received data using at least said processor and said at least one computer program, said processing comprising:
evaluating the at least a portion of the received data using said at least one computer program to identify at least a first portion of the received data that is of possible interest to a human reviewer, said identification comprising analyzing said at least portion of the received data to identify one or more artifacts corresponding to one or more known physiologic conditions associated with an intestinal tract of a human being; and
at least initially only presenting said first portion to the human reviewer for further review via the display device;
wherein said evaluating is performed automatically by said at least one computer program as part of said processing.
- View Dependent Claims (64-135)
-
64. The method of claim 63, wherein said analyzing comprises identifying one or more characteristic shapes associated with said one or more known physiologic conditions using one or more computer algorithms.
-
65. The method of claim 64, wherein said analyzing further comprises applying one or more mathematical corrections or adjustments for a visual perspective associated with the visual band sensor when the data was generated thereby.
-
66. The method of claim 63, wherein said identification is based at least in part on a multi-nodal spatial analysis of the at least portion of the received data.
-
67. The method of claim 63, wherein the processing further comprises generating a plurality of frames from the at least portion of the received data, and the identification of the first portion comprises selecting a subset of the plurality of frames based at least in part on scores attributed to said plurality of frames by said at least one computer program.
-
68. The method of claim 67, wherein at least some of said scores are based at least in part on recognition of (i) a multi-sided shape, or (ii) an artifact, rendered in at least one of the plurality of frames, the recognition comprising algorithmically analyzing said plurality of frames to identify the (i) multi-sided shape or (ii) the artifact as correlating to data of one or more previously stored templates, the one or more previously stored templates associated with the one or more known physiological conditions of the intestinal tract, the one or more known conditions of the intestinal tract comprising at least one of:
(i) polyps;
(ii) carcinomas; or
(iii) ulcerations.
-
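The frame-scoring approach of claims 67-68 can be illustrated with a minimal sketch. This is not the patented implementation; the function names, the normalized-correlation score, and the threshold are all hypothetical choices made for illustration. Each frame is correlated against stored templates for known conditions, and frames whose best score clears a threshold form the subset of potential interest.

```python
# Hypothetical sketch of claim 67-68 style frame scoring: each frame is
# compared against stored templates for known conditions (e.g. polyps,
# ulcerations), and the best correlation becomes the frame's score.

def correlate(frame, template):
    """Normalized dot-product correlation of two equal-size intensity lists."""
    num = sum(f * t for f, t in zip(frame, template))
    den = (sum(f * f for f in frame) * sum(t * t for t in template)) ** 0.5
    return num / den if den else 0.0

def score_frames(frames, templates, threshold=0.9):
    """Return indices of frames whose best template score meets the threshold."""
    subset = []
    for i, frame in enumerate(frames):
        best = max(correlate(frame, t) for t in templates.values())
        if best >= threshold:
            subset.append(i)
    return subset

templates = {"polyp": [0, 1, 1, 0], "ulceration": [1, 0, 0, 1]}
frames = [[0, 2, 2, 0], [1, 1, 1, 1], [2, 0, 0, 2]]
print(score_frames(frames, templates))  # → [0, 2]
```

Frames 0 and 2 correlate perfectly with a template (score 1.0), while frame 1 peaks at about 0.71 and is excluded.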
69. The method of claim 64, wherein said processing further comprises decompressing the data received at the computerized device using at least a data decompression algorithm, the data received having been compressed by a digital data processor of the autonomous intestinal probe using a data compression algorithm operative to run on the digital data processor prior to transmission to the computerized device, the compression of the data reducing a communications bandwidth necessary to transmit the data over a wireless interface of the autonomous intestinal probe used to communicate with the computerized device.
-
70. The method of claim 63, further comprising:
-
compressing the data derived from the visual band sensor using a digital data processor of the autonomous intestinal probe and a data compression algorithm operative to run on the digital data processor to produce compressed data prior to transmission to the computerized device; and wherein said processing further comprises decompressing the compressed data received at the computerized device using at least a data decompression algorithm operative to run on the processor thereof.
-
-
71. The method of claim 63, further comprising:
-
using a wireless data interface operating in an unlicensed portion of the radio frequency spectrum to communicate at least a portion of the data derived from the visual band sensor with a portable electronic device of a user substantially in real time with its collection by the swallowable intestinal probe, the unlicensed portion of the radio frequency spectrum comprising a 2.4 GHz ISM (Industrial, Scientific and Medical) band, the portable electronic device comprising a cellular telephone; and transmitting the communicated at least portion of the data derived by the sensor from the portable electronic device to the computerized device.
-
-
72. The method of claim 71, wherein the portable electronic device comprises an object-oriented computer application program rendered in a Java-based programming language, the object-oriented computer application program configured to communicate data via a network interface with the computerized device as part of a data session, and the method comprises establishing a data session between the portable electronic device and the computerized device for transmission of the at least portion of the data derived from the visual band sensor.
-
73. The method of claim 63, further comprising:
-
using a wireless data interface compliant with a first wireless data protocol and operating in a 2.4 GHz ISM (Industrial, Scientific and Medical) band of a portable electronic device to receive a transmitted at least portion of the data derived from the sensor; wherein transmitting the communicated at least portion of the data derived by the sensor from the portable electronic device to the computerized device comprises using the same wireless data interface compliant with the first wireless data protocol and operating in the 2.4 GHz ISM band.
-
-
74. The method of claim 63, wherein the receiving the data at the computerized device comprises:
accessing a data storage device of the autonomous intestinal probe to obtain the data, the accessing occurring after the autonomous intestinal probe has been excreted by the human being.
-
75. The method of claim 74, wherein the accessing a data storage device of the autonomous intestinal probe to obtain the data, the accessing occurring after the autonomous intestinal probe has been excreted by the human being, comprises using a device other than the computerized device to establish a data communication link with the autonomous intestinal probe in order to transfer the data from the data storage device to the other device.
-
76. The method of claim 75, wherein the data storage device of the autonomous intestinal probe comprises a flash memory sized to store at least the data.
-
77. The method of claim 63, wherein said processing further comprises:
algorithmically analyzing a correlation between a recognized (i) multi-sided shape, or (ii) artifact with the at least portion of the received data, and data of a previously stored template utilizing a plurality of nodal points associated with the (i) multi-sided shape or (ii) artifact.
-
78. The method of claim 77, wherein said plurality of nodal points associated with the (i) multi-sided shape or (ii) artifact are assigned using said at least one computer program, said assignment based at least in part on a pixel intensity analysis to identify at least a portion of a boundary of the (i) multi-sided shape or (ii) artifact.
-
79. The method of claim 78, wherein:
-
the processing further comprises generating a plurality of frames from the at least portion of the received data; and said identification of at least a portion of a boundary of the (i) multi-sided shape or (ii) artifact comprises identifying the at least portion of the boundary based at least in part on a change in pixel intensity as a function of spatial position within at least one of the plurality of frames.
-
-
80. The method of claim 79, wherein said processing further comprises performing at least one curve-fitting analysis based at least on the plurality of nodal points.
-
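Claims 78-80 describe placing nodal points where pixel intensity changes as a function of spatial position, then curve-fitting through them. A minimal sketch follows; the intensity-jump test, the straight-line fit, and all names are hypothetical simplifications, not the claimed method, which may use any boundary detector and any fitted curve.

```python
# Hypothetical sketch of claims 78-80: nodal points are placed where
# adjacent-pixel intensity changes sharply along a scan line (a boundary
# candidate), and a least-squares line is then fitted through such points.

def nodal_points(row, jump=50):
    """Indices where adjacent-pixel intensity changes by more than `jump`."""
    return [i for i in range(1, len(row)) if abs(row[i] - row[i - 1]) > jump]

def fit_line(points):
    """Ordinary least-squares fit y = a*x + b through (x, y) pairs."""
    n = len(points)
    sx = sum(x for x, _ in points); sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points); sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

row = [10, 12, 11, 200, 205, 203, 15, 14]  # bright feature amid dark tissue
print(nodal_points(row))                   # → [3, 6]
```

Indices 3 and 6 mark the two intensity jumps bounding the bright feature; repeating this across scan lines yields the 2-D nodal points that `fit_line` (or any richer curve model) can then fit.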
81. The method of claim 63, wherein:
-
the processing further comprises generating a plurality of frames from the at least portion of the received data; and said analyzing said at least portion of the received data for one or more artifacts corresponding to one or more known physiologic conditions associated with an intestinal tract of a human being further comprises algorithmically selecting a portion of at least one of said plurality of frames for analysis that is smaller than a complete field of view (FOV) of the sensor.
-
-
82. The method of claim 81, wherein said algorithmically selecting a portion of at least one of said plurality of frames for analysis that is smaller than a complete field of view (FOV) of the sensor comprises using multi-scale image processing to identify one or more features of interest.
-
83. The method of claim 63, wherein:
-
the processing further comprises generating a plurality of frames from the at least portion of the received data; and said analyzing said at least portion of the received data for one or more artifacts corresponding to one or more known physiologic conditions associated with an intestinal tract of a human being further comprises using multi-scale image processing to identify at least portions of the one or more artifacts within one or more of the plurality of frames.
-
-
84. The method of claim 83, wherein the using multi-scale image processing to identify at least portions of the one or more artifacts comprises performing the multi-scale image processing selectively only on portions of the one or more of the plurality of frames.
-
85. The method of claim 83, wherein the using multi-scale image processing to identify at least portions of the one or more artifacts comprises performing the multi-scale image processing to identify at least portions of the one or more artifacts which are not within a focal region of the visual band sensor.
-
86. The method of claim 63, wherein:
-
the processing further comprises generating a plurality of frames from the at least portion of the received data, each of the plurality of frames having a plurality of pixels; and said analyzing said at least portion of the received data for one or more artifacts corresponding to one or more known physiologic conditions associated with an intestinal tract of a human being further comprises algorithmically analyzing at least one intensity of at least a portion of the plurality of pixels in at least one of the plurality of frames to identify at least portions of the one or more artifacts within the at least one frame.
-
-
87. The method of claim 86, wherein the algorithmically analyzing at least one intensity of at least a portion of the plurality of pixels in at least one of the plurality of frames to identify at least portions of the one or more artifacts within the at least one frame comprises analyzing an absolute value of at least one grayscale value of each of the at least portion of the plurality of pixels to identify a boundary of the one or more artifacts.
-
88. The method of claim 86, wherein the algorithmically analyzing at least one intensity of at least a portion of the plurality of pixels in at least one of the plurality of frames to identify at least portions of the one or more artifacts within the at least one frame comprises analyzing at least relative or absolute R-G-B (red-green-blue) values of each of the at least portion of the plurality of pixels to identify the one or more artifacts.
-
89. The method of claim 88, wherein the absolute or relative R-G-B values of each of the at least portion of the plurality of pixels comprise respective multi-bit binary digital values representing respective R, G, or B intensity for each pixel.
-
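The per-pixel R-G-B analysis of claims 87-89 can be sketched as a relative channel test. This is purely illustrative: the red-dominance criterion, the ratio value, and the assumption that an ulceration appears reddish are all hypothetical stand-ins for whatever intensity rules the claimed method actually applies.

```python
# Hypothetical illustration of claims 87-89: each pixel carries multi-bit
# R, G, B values; a relative test (red dominance) flags candidate artifact
# pixels, assuming here that the artifact of interest appears reddish.

def red_dominant(pixel, ratio=1.5):
    """True when the red channel exceeds both other channels by `ratio`."""
    r, g, b = pixel
    return r > ratio * g and r > ratio * b

frame = [(200, 60, 50), (90, 95, 88), (180, 40, 45)]
flags = [red_dominant(p) for p in frame]
print(flags)  # → [True, False, True]
```

An absolute-grayscale variant (claim 87) would instead threshold a single luminance value per pixel; the relative form shown is more robust to overall illumination changes inside the tract.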
90. The method of claim 86, wherein the algorithmically analyzing at least one intensity of at least a portion of the plurality of pixels in at least one of the plurality of frames to identify at least portions of the one or more artifacts within the at least one frame comprises correlating the at least one intensity to a distance of the at least portions of the one or more artifacts within the intestinal tract from the visual band sensor when the data was obtained thereby.
-
91. The method of claim 63, wherein said evaluating comprises:
-
accessing a stored representation of at least one anatomical aspect previously obtained from a human being using at least one autonomous swallowable intestinal probe for comparison to the one or more artifacts, the stored representation of the at least one anatomical aspect comprising visual-band imagery data of a disease condition of the intestinal tract; and algorithmically comparing at least a portion of the at least portion of received data to the accessed stored representation to determine a level of similarity.
-
-
92. The method of claim 91, wherein said algorithmically comparing at least a portion of the at least portion of received data to the accessed stored representation to determine a level of similarity comprises:
-
identifying a plurality of nodal points associated with the one or more artifacts; evaluating a degree of mathematical correlation of the plurality of nodal points to respective ones of nodal points of the accessed stored representation; and generating a score or index reflective of the evaluated degree of correlation.
-
-
93. The method of claim 91, wherein said algorithmically comparing at least a portion of the at least portion of received data to the accessed stored representation to determine a level of similarity comprises:
-
identifying one or more spatial or dimensional parameters associated with the one or more artifacts; evaluating a degree of mathematical correlation of the one or more parameters to respective ones of one or more parameters of the accessed stored representation; and generating a score or index reflective of the evaluated degree of correlation.
-
-
94. The method of claim 93, wherein:
-
said identifying one or more spatial or dimensional parameters associated with the one or more artifacts comprises calculating at least one first ratio of a first dimension of the one or more artifacts to a second dimension of the one or more artifacts; and said evaluating a degree of mathematical correlation of the one or more parameters to respective ones of one or more parameters of the accessed stored representation comprises calculating at least one second ratio of a first dimension of the accessed stored representation to a second dimension of the accessed stored representation, and comparing the at least one first ratio to the at least one second ratio.
-
-
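The dimensional-ratio comparison of claims 93-94 reduces to comparing one aspect ratio against another. The sketch below is a hypothetical rendering: the relative-difference score and all names are illustrative, not the claimed scoring function.

```python
# Hypothetical sketch of claims 93-94: the artifact's ratio of a first
# dimension to a second dimension is compared against the same ratio from
# a stored representation; 0.0 here means perfect agreement.

def dimension_ratio(width, height):
    return width / height

def ratio_score(artifact_dims, template_dims):
    """Relative difference between artifact and template dimension ratios."""
    r1 = dimension_ratio(*artifact_dims)
    r2 = dimension_ratio(*template_dims)
    return abs(r1 - r2) / r2

print(ratio_score((6, 4), (3, 2)))  # → 0.0 (both ratios are 1.5)
print(ratio_score((6, 3), (3, 2)))  # observed ratio 2.0 vs stored 1.5
```

Because a ratio is scale-invariant, this comparison works even though the probe's distance from the artifact (and hence its apparent size) is unknown.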
95. The method of claim 63, further comprising performing, using at least one computer program, a point-to-point matching analysis, the point-to-point matching analysis configured to automatically identify a plurality of homologous pairs of data, the pairs of data comprising data from (i) the one or more frames, and (ii) a corresponding stored data file representative of a known physiological condition or disease.
-
96. The method of claim 63, wherein the at least a portion of the received data comprises a plurality of frames each comprising a plurality of pixels, and said identification of the subset is based at least in part on performing a homologity analysis of the plurality of frames and a previously stored reference image or frame.
-
97. The method of claim 96, wherein said homologity analysis of the plurality of frames and a previously stored reference image or frame comprises comparing an intensity value for each of a plurality of pixels within at least one of the plurality of frames to a corresponding intensity value of a respective one of a plurality of pixels in the previously stored reference image or frame.
-
98. The method of claim 97, wherein said comparing an intensity value for each of a plurality of pixels within at least one of the plurality of frames to a corresponding intensity value of a respective one of a plurality of pixels in the previously stored reference image or frame comprises a best correction principle wherein:
-
r1(c,l) comprises a matrix of the at least one of the plurality of frames; r2(c,l) comprises a matrix of the previously stored reference image or frame; P1=(c1, l1) comprises a pixel of the at least one of the plurality of frames; P2=(c2, l2) comprises a pixel of the previously stored reference image or frame; and wherein r1(P1)=r2(P2) for at least a portion of the plurality of pixels of the at least one frame and the previously stored reference image or frame.
-
-
99. The method of claim 98, further wherein vP1(c,l)=vP2(c,l) for −h≦c≦h, −k≦l≦k, where: 1) vP1(c,l)=r1(c1+c, l1+l), and 2) vP2(c,l)=r2(c2+c, l2+l), −h≦c≦h, −k≦l≦k, comprise first and second local regions of pixels P1 and P2, respectively.
-
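The "best correction principle" of claims 98-99 says two pixels P1 and P2 are homologous when their local regions of half-widths h and k agree value-for-value. A minimal sketch, with hypothetical names and plain lists of lists standing in for the matrices r1 and r2:

```python
# Hypothetical sketch of claims 98-99: pixels P1 and P2 match when their
# (2h+1) x (2k+1) local regions agree element-for-element.

def local_region(r, c0, l0, h, k):
    """Extract the window r[c0-h..c0+h][l0-k..l0+k] around pixel (c0, l0)."""
    return [[r[c0 + c][l0 + l] for l in range(-k, k + 1)]
            for c in range(-h, h + 1)]

def regions_match(r1, p1, r2, p2, h=1, k=1):
    """True when vP1(c,l) == vP2(c,l) over the whole local window."""
    (c1, l1), (c2, l2) = p1, p2
    return local_region(r1, c1, l1, h, k) == local_region(r2, c2, l2, h, k)

r1 = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
r2 = [[0, 0, 0, 0], [0, 1, 2, 3], [0, 4, 5, 6], [0, 7, 8, 9]]
print(regions_match(r1, (1, 1), r2, (2, 2)))  # → True
```

Pixel (1,1) of r1 and pixel (2,2) of r2 share an identical 3x3 neighbourhood, so they satisfy the matching condition even though the frames as a whole differ.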
100. The method of claim 96, wherein said homologity analysis of the plurality of frames and a previously stored reference image or frame comprises comparing an intensity value for each of a plurality of pixels within at least one of the plurality of frames to a corresponding intensity value of a respective one of a plurality of pixels in the previously stored reference image or frame, the comparing determining an index I(vP1,vP2), the index defined through a distance of the form:
- d1(vP1,vP2) = Σ_{c,l} |vP1(c,l) − vP2(c,l)|; where vP1 and vP2 comprise first and second local regions of pixels P1 and P2, respectively.
-
101. The method of claim 100, further comprising comparing the determined index to an optimum or theoretical value obtained by using two identical local regions of the at least one frame and the previously stored reference image or frame, respectively.
-
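The distance index of claims 100-101 is a sum of absolute pixel differences over a local region, and the "optimum or theoretical value" obtained from two identical regions is 0. A minimal sketch (names hypothetical):

```python
# Hypothetical sketch of claims 100-101: d1 sums absolute differences of
# corresponding pixels in two local regions; smaller d1 means a closer
# match, and two identical regions give the optimum value 0.

def d1(vP1, vP2):
    """d1(vP1, vP2) = sum over (c, l) of |vP1(c,l) - vP2(c,l)|."""
    return sum(abs(a - b) for row1, row2 in zip(vP1, vP2)
               for a, b in zip(row1, row2))

vP1 = [[10, 20], [30, 40]]
vP2 = [[12, 20], [30, 37]]
print(d1(vP1, vP2))  # → 5
print(d1(vP1, vP1))  # → 0, the theoretical optimum of claim 101
```

The comparison of claim 101 then amounts to checking how far the computed index departs from that zero optimum.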
102. The method of claim 96, wherein said homologity analysis of the plurality of frames and a previously stored reference image or frame comprises comparing an intensity value for each of a plurality of pixels within at least one of the plurality of frames to a corresponding intensity value of a respective one of a plurality of pixels in the previously stored reference image or frame, the comparing determining an index of the form I(vP1,vP2) = Σ_{c,l} vP1(c,l)vP2(c,l); where vP1 and vP2 comprise first and second local regions of pixels P1 and P2, respectively.
-
103. The method of claim 102, further comprising comparing the determined index to an optimum or theoretical value obtained by using two identical local regions of the at least one frame and the previously stored reference image or frame, respectively.
-
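The product index of claims 102-103 is the unnormalized cross-correlation of two local regions; its optimum for claim 103's comparison is the value obtained from two identical regions, i.e. the region's sum of squares. A minimal sketch with hypothetical names:

```python
# Hypothetical sketch of claims 102-103: the index sums products of
# corresponding pixels; the optimum, from two identical regions, is the
# sum of squares, so index/optimum gauges similarity on a 0..1 scale.

def product_index(vP1, vP2):
    """I(vP1, vP2) = sum over (c, l) of vP1(c,l) * vP2(c,l)."""
    return sum(a * b for row1, row2 in zip(vP1, vP2)
               for a, b in zip(row1, row2))

vP1 = [[1, 2], [3, 4]]
vP2 = [[1, 2], [3, 3]]
index = product_index(vP1, vP2)    # 1 + 4 + 9 + 12 = 26
optimum = product_index(vP1, vP1)  # 1 + 4 + 9 + 16 = 30
print(index, optimum)              # → 26 30
```

Unlike the d1 distance of claim 100, larger is better here, which is why claim 103 compares against the identical-region optimum rather than against zero.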
104. The method of claim 96, wherein said homologity analysis of the plurality of frames and a previously stored reference image or frame comprises comparing each of a plurality of pixels within at least one of the plurality of frames to a corresponding respective one of a plurality of pixels in the previously stored reference image or frame.
-
105. The method of claim 104, wherein said comparing each of a plurality of pixels within at least one of the plurality of frames to a corresponding respective one of a plurality of pixels in the previously stored reference image or frame comprises a best correction principle wherein:
-
r1(c,l) comprises a matrix of the at least one of the plurality of frames; r2(c,l) comprises a matrix of the previously stored reference image or frame; P1=(c1, l1) comprises a pixel of the at least one of the plurality of frames; P2=(c2, l2) comprises a pixel of the previously stored reference image or frame; and wherein r1(P1)=r2(P2) for at least a portion of the plurality of pixels of the at least one frame and the previously stored reference image or frame.
-
-
106. The method of claim 105, further wherein vP1(c,l)=vP2(c,l) for −h≦c≦h, −k≦l≦k, where: 1) vP1(c,l)=r1(c1+c, l1+l), and 2) vP2(c,l)=r2(c2+c, l2+l), −h≦c≦h, −k≦l≦k, comprise first and second local regions of pixels P1 and P2, respectively.
-
107. The method of claim 96, wherein said homologity analysis of the plurality of frames and a previously stored reference image or frame comprises comparing each of a plurality of pixels within at least one of the plurality of frames to a corresponding respective one of a plurality of pixels in the previously stored reference image or frame, the comparing determining an index I(vP1,vP2), the index defined through a distance of the form:
- d1(vP1,vP2) = Σ_{c,l} |vP1(c,l) − vP2(c,l)|; where vP1 and vP2 comprise first and second local regions of pixels P1 and P2, respectively.
-
108. The method of claim 63, wherein said evaluating comprises:
-
accessing a stored representation of at least one anatomical aspect for comparison to the one or more artifacts, the stored representation of the at least one anatomical aspect comprising a data template of a disease condition of the intestinal tract; and algorithmically comparing at least a portion of the at least portion of received data to the accessed stored representation to determine a level of similarity.
-
-
109. The method of claim 108, wherein said algorithmically comparing at least a portion of the at least portion of received data to the accessed stored representation to determine a level of similarity comprises:
-
identifying a plurality of nodal points associated with the one or more artifacts; evaluating a degree of mathematical correlation of the plurality of nodal points to respective ones of nodal points of the accessed stored representation; and generating a score or index reflective of the evaluated degree of correlation.
-
-
110. The method of claim 108, wherein said algorithmically comparing at least a portion of the at least portion of received data to the accessed stored representation to determine a level of similarity comprises:
-
identifying one or more spatial or dimensional parameters associated with the one or more artifacts; evaluating a degree of mathematical correlation of the one or more parameters to respective ones of one or more parameters of the accessed stored representation; and generating a score or index reflective of the evaluated degree of correlation.
-
-
111. The method of claim 110, wherein:
-
said identifying one or more spatial or dimensional parameters associated with the one or more artifacts comprises calculating at least one first ratio of a first dimension of the one or more artifacts to a second dimension of the one or more artifacts; and said evaluating a degree of mathematical correlation of the one or more parameters to respective ones of one or more parameters of the accessed stored representation comprises calculating at least one second ratio of a first dimension of the accessed stored representation to a second dimension of the accessed stored representation, and comparing the at least one first ratio to the at least one second ratio.
-
-
112. The method of claim 63, wherein said evaluating comprises:
-
calculating at least one value for one or more spatial or dimensional parameters associated with the one or more artifacts; and searching a database, the database comprising data associating the one or more spatial or dimensional parameters to known physiological conditions or diseases, the searching to identify data associated with one or more of the known physiological conditions or diseases matching the calculated at least one value.
-
-
113. The method of claim 112, wherein said calculating at least one value for one or more spatial or dimensional parameters associated with the one or more artifacts comprises calculating at least one first ratio of a first dimension of the one or more artifacts to a second dimension of the one or more artifacts.
-
114. The method of claim 112, wherein said matching the calculated at least one value comprises the calculated at least one value being within a prescribed threshold value of a corresponding value associated with the identified data of the database.
-
115. The method of claim 63, wherein said evaluating comprises:
-
mathematically characterizing one or more spatial or dimensional parameters associated with the one or more artifacts to create a model; and performing a mathematical rotation of the model about one or more prescribed axes of rotation; and evaluating a correlation of the rotated model to data contained in a shape database, the shape database comprising a plurality of shape data corresponding to one or more known physiologic conditions or diseases of the intestinal tract of a human being.
-
-
116. The method of claim 115, wherein said mathematically characterizing one or more spatial or dimensional parameters associated with the one or more artifacts to create a model comprises calculating one or more dimensions of the one or more artifacts based on the at least portion of the received data.
-
117. The method of claim 116, wherein said calculating one or more dimensions of the one or more artifacts based on the at least portion of the received data comprises calculating two or more dimensions, and the evaluating the at least portion of the received data further comprises calculating at least one ratio from the two or more calculated dimensions.
-
118. The method of claim 117, wherein said evaluating the at least portion of the received data further comprises iteratively:
- (i) performing the rotation, and (ii) evaluating the correlation of the rotated model for that iteration.
-
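The iterative rotate-and-correlate procedure of claims 115-118 can be sketched with a 2-D point model. This is a deliberately simplified illustration: the point representation, the step count, and the use of summed point distance as the (inverse) correlation measure are all hypothetical choices, not the claimed model or metric.

```python
# Hypothetical sketch of claims 115-118: a point model of an artifact is
# rotated in steps about one axis and, at each iteration, its mismatch
# (summed point distance) against each database shape is evaluated; the
# best (rotation, shape) pair identifies the most correlated shape.

import math

def rotate(points, theta):
    """Rotate 2-D points about the origin by angle theta (radians)."""
    c, s = math.cos(theta), math.sin(theta)
    return [(x * c - y * s, x * s + y * c) for x, y in points]

def mismatch(a, b):
    """Summed point-to-point distance between two equal-length point sets."""
    return sum(math.dist(p, q) for p, q in zip(a, b))

def best_match(model, shape_db, steps=36):
    best = None
    for i in range(steps):                    # iterate rotations (claim 118)
        rotated = rotate(model, 2 * math.pi * i / steps)
        for name, shape in shape_db.items():  # evaluate correlation per shape
            err = mismatch(rotated, shape)
            if best is None or err < best[0]:
                best = (err, name)
    return best[1]

shape_db = {"polyp": [(0.0, 1.0), (0.0, -1.0)],
            "fold": [(2.0, 0.0), (-2.0, 0.0)]}
model = [(1.0, 0.0), (-1.0, 0.0)]  # matches "polyp" once rotated 90 degrees
print(best_match(model, shape_db))  # → polyp
```

The rotation step matters because the probe's orientation in the tract is arbitrary: a shape that fails to correlate at one orientation may correlate perfectly after rotation, as the 90-degree case above shows.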
119. The method of claim 63, wherein:
-
the processing further comprises generating a plurality of frames from the at least portion of the received data; the analyzing said at least portion of the received data to identify one or more artifacts corresponding to one or more known physiologic conditions associated with an intestinal tract of a human being comprises analyzing individual ones of the plurality of frames; and said analyzing individual ones comprises mathematically constraining the analyzing of at least one of the plurality of frames to only one or more physically possible orientations of the one or more artifacts.
-
-
120. The method of claim 119, wherein said mathematically constraining the analyzing of at least one of the plurality of frames to one or more physically possible orientations comprises:
-
using a mathematical model of the intestinal tract correlating to a cylindrical shape; using a mathematical model of the visual band sensor of the intestinal probe being disposed along a longitudinal axis of the probe, the longitudinal axis being disposed within an interior of the cylindrical shape; and using a mathematical model of the one or more artifacts being disposed on or connected to an interior surface of the cylindrical shape.
-
-
121. The method of claim 63, wherein said analyzing said at least portion of the received data to identify one or more artifacts corresponding to one or more known physiologic conditions associated with an intestinal tract of a human being, comprises:
-
assigning a plurality of nodal points to at least a portion of a feature boundary of at least one of the one or more artifacts; performing curve-fitting to assign at least one mathematical function to the at least portion of the feature boundary; and determining two or more parameters of the at least one artifact based at least on the at least one mathematical function.
-
-
122. The method of claim 63, wherein:
-
the processing further comprises generating a plurality of frames from the at least portion of the received data; and said at least initially only presenting said first portion to the human reviewer for further review via the display device comprises displaying a subset of the plurality of frames, the subset associated with the first portion, the subset of the plurality of frames displayed in a sequence at a prescribed display rate, the display rate selected to enable detection of motion-related artifact relating to one or more anatomical features of the intestinal tract.
-
-
123. The method of claim 63, wherein:
-
the processing further comprises generating a plurality of image data frames from the at least portion of the received data; and said at least initially only presenting said first portion to the human reviewer for further review via the display device comprises displaying a subset of the plurality of frames, the subset associated with the first portion only, the subset of the plurality of frames displayed in a temporal sequence, so as to expedite review by the human reviewer of the received data.
-
-
124. The method of claim 123, wherein:
-
the processing further comprises generating a plurality of image data frames from the at least portion of the received data; and said at least initially only presenting said first portion to the human reviewer via the display device comprises displaying a subset of the plurality of frames in a temporal sequence, the displaying further comprising selectively deleting or omitting of at least one of said plurality of frames according to a prescribed temporal window or number of frames.
-
-
125. The method of claim 124, wherein the prescribed temporal window or number of frames comprises a prescribed number of frames before a user-selected frame of interest in the sequence, and a prescribed number of frames after the user-selected frame of interest, at least one of the prescribed number of frames before the frame of interest, and at least one of the prescribed number of frames after the frame of interest, being temporally contiguous with the frame of interest.
-
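The temporal-window selection of claims 124-125 keeps a prescribed number of contiguous frames around each frame of interest and omits the rest. A minimal sketch; the function name and default window sizes are hypothetical:

```python
# Hypothetical sketch of claim 125: the displayed sequence retains only a
# prescribed number of temporally contiguous frames before and after each
# user-selected frame of interest; all other frames are omitted.

def windowed_subset(n_frames, frames_of_interest, before=2, after=2):
    """Indices to display: each frame of interest plus its temporal window."""
    keep = set()
    for f in frames_of_interest:
        lo = max(0, f - before)
        hi = min(n_frames - 1, f + after)
        keep.update(range(lo, hi + 1))
    return sorted(keep)

# 20-frame capture, reviewer flags frames 5 and 14:
print(windowed_subset(20, [5, 14]))  # → [3, 4, 5, 6, 7, 12, 13, 14, 15, 16]
```

Keeping the windows contiguous preserves the motion context around each flagged frame while still discarding the bulk of the capture, which is the review-expediting point of claims 123-126.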
126. The method of claim 123, wherein:
-
the processing further comprises generating a plurality of image data frames from the at least portion of the received data; and said at least initially only presenting said first portion to the human reviewer for further review via the display device comprises displaying a subset of the plurality of frames in a temporal sequence, the displaying further comprising selectively deleting or omitting of at least one of said plurality of frames according to a user-prescribed sampling rate.
-
-
127. The method of claim 63, further comprising:
-
generating a first plurality of frames from said at least portion of said received data, said first plurality of frames associated with a first portion of the intestinal tract; wherein the evaluating the at least a portion of the received data using said at least one computer program comprises identifying a subset of said first plurality of frames of possible interest to the human reviewer, the identification of the subset based at least on identifying one or more artifacts rendered in at least one of the frames of the first subset; and generating a second plurality of frames associated with a second portion of the intestinal tract, the second portion of the intestinal tract being subsequent to the first portion in terms of travel of the intestinal probe; and using said at least one computer program to analyze said second plurality of frames to identify a second subset of said second plurality of frames of possible interest to a user of the computerized device based at least in part on a comparison of the second plurality of frames to the identified one or more artifacts rendered in at least one of the frames of the first subset; and wherein the presenting said first portion further comprises presenting at least a portion of the second subset of frames as part of a sequence, the sequence also comprising at least a portion of the first subset of frames, the at least portion of the first subset of frames displayed within the sequence before the at least portion of the second subset of frames.
-
-
128. The method of claim 63, wherein:
-
the processing further comprises generating a plurality of image data frames from the at least portion of the received data; said analyzing said at least portion of the received data to identify one or more artifacts corresponding to one or more known physiologic conditions associated with an intestinal tract of a human being comprises analysis of an intensity of one or more particular colors or wavelengths of light within said at least portion of said received data, the one or more colors or wavelengths selected from the group consisting of red, green and blue (R-G-B); and said at least initially only presenting said first portion to the human reviewer for further review via the display device comprises displaying a subset of the plurality of frames, the subset selected based at least on said analysis of an intensity.
-
-
129. The method of claim 63, wherein:
-
the processing further comprises generating a plurality of image data frames from the at least portion of the received data; said analyzing said at least portion of the received data to identify one or more artifacts corresponding to one or more known physiologic conditions associated with an intestinal tract of a human being comprises analyzing a similarity of one or more portions of at least one of said plurality of frames as compared to another of said plurality of frames; and said at least initially only presenting said first portion to the human reviewer via the display device comprises displaying only a subset of the plurality of frames in a temporal sequence, the subset selected based at least on said analyzing a similarity.
-
-
130. The method of claim 129, wherein the analyzing a similarity of one or more portions of at least one of said plurality of frames as compared to another of said plurality of frames comprises analyzing an intensity of one or more particular colors or wavelengths of light within said one or more portions of said at least one frame as compared to one or more portions of said another frame, the one or more colors or wavelengths selected from the group consisting of red, green and blue (R-G-B).
-
131. The method of claim 129, wherein the analyzing a similarity of one or more portions of at least one of said plurality of frames as compared to another of said plurality of frames comprises analyzing a recognized shape identified within said one or more portions of said at least one frame as compared to one or more portions of said another frame.
-
132. The method of claim 63, wherein:
-
the processing further comprises generating a plurality of image data frames from the at least portion of the received data; said analyzing said at least portion of the received data to identify one or more artifacts corresponding to one or more known physiologic conditions associated with an intestinal tract of a human being comprises identifying a plurality of frames having the one or more artifacts therein; and said at least initially only presenting said first portion to the human reviewer via the display device comprises displaying only a subset of the plurality of frames in a temporal sequence, the subset including only the identified plurality of frames.
-
133. The method of claim 63, wherein:
-
the processing further comprises generating a plurality of image data frames from the at least portion of the received data; and said at least initially only presenting said first portion to the human reviewer comprises displaying only a subset of the plurality of frames displayed in a sequence, and selectively causing increase of a display rate of at least a portion of the sequence on the display device so as to expedite review of the received at least portion of image data by more rapidly moving through the at least portion of the sequence, the at least portion of the sequence associated with a reduced probability of occurrence of said one or more artifacts than a remainder of the sequence.
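The rate-increase idea in claim 133 can be sketched as a per-frame display schedule: stretches with a reduced artifact probability get a shorter dwell time so the reviewer moves through them faster. The probability values, threshold, and speed-up factor below are hypothetical, not from the patent.

```python
# Illustrative sketch (hypothetical parameters): speed up playback over
# stretches of the sequence with a reduced probability of artifacts.

BASE_SECONDS = 0.5   # assumed normal dwell time per frame
FAST_FACTOR = 4.0    # assumed speed-up over low-interest stretches

def display_schedule(artifact_probs, threshold=0.2):
    """Return (frame_index, dwell_seconds) pairs.

    Frames whose artifact probability falls below the threshold are shown
    for a fraction of the base dwell time, expediting review of that stretch.
    """
    schedule = []
    for i, p in enumerate(artifact_probs):
        dwell = BASE_SECONDS / FAST_FACTOR if p < threshold else BASE_SECONDS
        schedule.append((i, dwell))
    return schedule

probs = [0.05, 0.1, 0.9, 0.85, 0.02]
for idx, dwell in display_schedule(probs):
    print(idx, dwell)   # frames 0, 1 and 4 play 4x faster than frames 2 and 3
```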
-
134. The method of claim 63, further comprising automatically using the identification of the one or more artifacts as a gating or “go/no-go” criterion for further analytical steps to be performed by the at least one computer program, the further analytical steps related to an identity of the identified one or more artifacts.
-
135. The method of claim 134, wherein the identity of the identified one or more artifacts comprises a polyp, and the further steps comprise at least an algorithmic shape analysis of the polyp.
-
64. The method of claim 63, wherein said analyzing comprises identifying one or more characteristic shapes associated with said one or more known physiologic conditions using one or more computer algorithms.
-
136. A method of processing data derived from a sensor disposed on a first intestinal probe, said data having been generated while said first intestinal probe moved autonomously through an intestinal tract of a living being, the method comprising:
-
receiving the data at a first computerized device comprising a processor, a network interface, and at least one computer program configured to run on the processor, the computerized device disposed at a centralized network location that is configured to at least establish digital data communication with a plurality of intestinal probes, and via at least said digital data communication, receive and store respective data generated thereby; and transmitting, via at least the network interface, at least a portion of the data derived from the sensor disposed on the first intestinal probe to a second computerized device comprising a display device, a processor, a network interface, and at least one computer program configured to run on the processor, the second computerized device configured for processing said transmitted at least portion of the data using at least said processor and said at least one computer program of the second computerized device, said processing by said second computerized device comprising: evaluating a plurality of image frames associated with the transmitted at least portion of the data to identify at least a first portion of the plurality of image frames that is of possible interest to a human reviewer based at least on the presence of one or more physiological features or artifacts, the presence of the one or more artifacts based at least on algorithmically analyzing the plurality of image frames; and selectively presenting only said first portion of the plurality of image frames to the human reviewer for further review via the display device. - View Dependent Claims (137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155)
-
137. The method of claim 136, wherein the receiving the data at the first computerized device comprises:
accessing a data storage device of the swallowable intestinal probe via a digital data communication protocol to obtain the data, the accessing occurring after the swallowable intestinal probe has been excreted by the living being.
-
138. The method of claim 137, wherein the accessing a data storage device of the swallowable intestinal probe to obtain the data, the accessing occurring after the swallowable intestinal probe has been excreted by the living being, comprises using a device other than the first computerized device to establish the digital data communication with the swallowable intestinal probe in order to transfer the data from a flash memory of the swallowable intestinal probe to the other device.
-
139. The method of claim 136, wherein the receiving the data at a first computerized device comprises receiving the data at a network computerized device configured to authenticate at least a portion of the received data, the authentication based at least in part on one or more authentication data packets received by the network computerized device in association with the at least portion of the received data.
-
140. The method of claim 136, wherein the transmitted at least portion of the data comprises digital image data comprising color image data, and said evaluating comprises analyzing an intensity of one or more particular colors or wavelengths of light within said color image data, the one or more colors or wavelengths selected from the group consisting of red, green and blue (R-G-B).
-
141. The method of claim 136, wherein the transmitted at least portion of the data comprises digital image data comprising color image data, and said identification of the at least portion of the plurality of image frames is based at least in part on analyzing an intensity of one or more particular colors or wavelengths of light within said color image data, the one or more colors or wavelengths selected from the group consisting of red, green and blue (R-G-B).
-
142. The method of claim 136, wherein the at least one computer program is configured to utilize multiple software threads or objects, each of the multiple software threads or objects supporting respective portions of the display device as part of said selectively presenting, each of the multiple software threads or objects being independently controllable by at least the human reviewer.
-
143. The method of claim 136, wherein said plurality of image frames each comprise a plurality of pixels, and said identification of the at least portion of the plurality of image frames is based at least in part on performing a homologity analysis of the plurality of image frames and a previously stored reference image or frame.
-
144. The method of claim 143, wherein said homologity analysis of the plurality of image frames and a previously stored reference image or frame comprises comparing an intensity value for each of a plurality of pixels within at least one of the plurality of image frames to a corresponding intensity value of a respective one of a plurality of pixels in the previously stored reference image or frame.
-
145. The method of claim 143, wherein said homologity analysis of the plurality of frames and a previously stored reference image or frame comprises comparing each of a plurality of pixels within at least one of the plurality of frames to a corresponding respective one of a plurality of pixels in the previously stored reference image or frame, the comparing comprising determining an index I(vP1,vP2), the index defined through a distance of the form:
d1(vP1,vP2) = Σc,l |vP1(c,l) − vP2(c,l)|; where vP1 and vP2 comprise first and second local regions of pixels P1 and P2, respectively.
-
146. The method of claim 145, further comprising comparing the determined index to an optimum or theoretical value obtained by using two identical local regions of the at least one frame and the previously stored reference image or frame, respectively.
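The index of claims 145-146 can be sketched directly from the formula: a sum of absolute pixel-wise intensity differences over two local regions, where the "optimum or theoretical value" of claim 146 follows from comparing a region to itself (the distance is then zero). This is an illustrative reading; the region shape and intensity representation are assumptions.

```python
# Illustrative sketch of the claimed index
#     d1(vP1, vP2) = sum over (c, l) of |vP1(c, l) - vP2(c, l)|
# where vP1 and vP2 are equal-size local pixel regions and (c, l) indexes
# column and line within each region. Regions are assumed to be lists of
# rows of scalar intensities.

def d1(vP1, vP2):
    """Sum of absolute pixel-wise intensity differences between two regions."""
    return sum(
        abs(vP1[l][c] - vP2[l][c])
        for l in range(len(vP1))
        for c in range(len(vP1[0]))
    )

region_a = [[10, 20], [30, 40]]
region_b = [[12, 18], [30, 45]]

print(d1(region_a, region_b))  # 2 + 2 + 0 + 5 = 9
print(d1(region_a, region_a))  # 0: the optimum value of claim 146,
                               # obtained from two identical regions
```

Comparing d1 against this optimum of zero (claim 146) gives a normalized sense of how closely a frame region matches the reference.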
-
147. The method of claim 136, further comprising selectively deleting or omitting, using the at least one computer program, at least one of said plurality of image frames before said selectively presenting so as to further expedite user review of the received data.
-
148. The method of claim 147, wherein the selectively deleting or omitting of at least one of said plurality of image frames comprises selectively deleting or omitting according to a prescribed temporal window or number of image frames.
-
149. The method of claim 148, wherein the prescribed temporal window or number of image frames comprises a prescribed number of image frames before a user-selected image frame of interest in a sequence of the plurality of image frames, and a prescribed number of image frames after the user-selected image frame of interest, at least one of the prescribed number of image frames before the image frame of interest, and at least one of the prescribed number of image frames after the image frame of interest, being temporally contiguous with the image frame of interest.
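The windowed retention of claim 149 can be sketched as keeping a prescribed number of frames on each side of a user-selected frame of interest, contiguous with it, and deleting or omitting the rest. The window sizes below are hypothetical.

```python
# Illustrative sketch (hypothetical window sizes): retain a prescribed number
# of frames before and after a user-selected frame of interest; frames outside
# this contiguous window are candidates for deletion or omission.

def frames_to_keep(num_frames, frame_of_interest, before=3, after=3):
    """Indices of the retained window, clipped to the sequence bounds."""
    start = max(0, frame_of_interest - before)
    end = min(num_frames - 1, frame_of_interest + after)
    return list(range(start, end + 1))

print(frames_to_keep(100, 50, before=2, after=2))  # [48, 49, 50, 51, 52]
print(frames_to_keep(10, 0, before=3, after=3))    # clipped at the start
```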
-
150. The method of claim 147, wherein the selectively deleting or omitting of at least one of said image frames comprises selectively deleting or omitting according to a user-prescribed sampling rate.
-
151. The method of claim 147, wherein the selectively deleting or omitting of at least one of said image frames comprises selectively deleting or omitting based at least on an analysis of an intensity of one or more particular colors or wavelengths of light within certain ones of the plurality of image frames, the one or more colors or wavelengths selected from the group consisting of red, green and blue (R-G-B).
-
152. The method of claim 147, wherein the selectively deleting or omitting comprises selectively deleting or omitting based at least on analyzing a similarity of one or more portions of at least one of said plurality of image frames as compared to another frame.
-
153. The method of claim 147, wherein the selectively deleting or omitting of at least one of said frames comprises selectively deleting or omitting based on analyzing one or more portions of at least one of said plurality of image frames, the analyzing one or more portions comprising identifying of one or more portions of the plurality of image frames corresponding to a reduced probability of disease relative to other portions, the selectively deleting or omitting comprising deleting or omitting of one or more image frames associated with said identified one or more portions.
-
154. The method of claim 136, wherein the first intestinal probe comprises an accelerometer apparatus configured to detect acceleration of at least the first intestinal probe, and based on said detection, cause at least initiation of collection of the signals by the sensor; and wherein the evaluating comprises evaluating a plurality of image frames associated with signals collected only after said initiation.
-
155. The method of claim 136, wherein:
-
the first intestinal probe comprises an accelerometer apparatus configured to detect acceleration of at least the first intestinal probe; and the method further comprises: detecting, using at least said accelerometer apparatus, acceleration of at least the first intestinal probe; causing, based at least on the detecting, initiation of collection of the signals by the sensor; converting at least a portion of the collected signals to the data; and wherein the evaluating comprises evaluating only those of the plurality of image frames associated with data converted from signals collected after said initiation.
-
156. A method of processing data derived from signals of a visual-band sensor disposed on a swallowable intestinal probe, said signals having been generated while said swallowable intestinal probe moved autonomously through the intestinal tract of a living being, the method configured to enhance an efficiency of review of the data by a human reviewer, the method comprising:
-
processing, at a computerized device comprising a display device, a processor, and at least one computer program configured to run on the processor, at least a portion of said data, the processing using at least said processor and said at least one computer program, said at least a portion of said data comprising a plurality of frames, said processing comprising: automatically selecting a subset of said plurality of frames that are likely to be of interest to the reviewer, the automatically selecting being without user input identifying the subset and based at least in part on detecting the presence of a physiological artifact rendered within at least a first one of the plurality of frames, the automatically selecting of the subset further comprising selecting one or more frames temporally contiguous to said first one of the frames; and generating a display on the display device, the display comprising at least two of said subset of said plurality of frames displayed simultaneously as a temporal sequence in respective ones of at least two contiguous regions of a display screen of the display device. - View Dependent Claims (157, 158, 159, 160, 161, 162, 163, 164, 165)
-
157. The method of claim 156, wherein:
-
the at least two of said subset of said plurality of frames displayed simultaneously in respective ones of at least two contiguous regions of the display screen of the display device comprise the first one of the plurality of the frames, and two frames temporally contiguous to said first one of the frames; and the simultaneous display as a temporal sequence comprises display of additional frames both temporally before and after the two temporally contiguous frames, the method further comprising selecting the additional frames based on a prescribed temporal window or duration value.
-
158. The method of claim 156, wherein the detecting, using the at least one computer program, the presence of a physiological artifact rendered within at least a first one of the plurality of frames, comprises detecting the presence of the physiological artifact based at least on a pixel intensity analysis of at least a region of the at least first one of the plurality of frames.
-
159. The method of claim 158, wherein the data comprises color image data, and the pixel intensity analysis of at least a region of the at least first one of the plurality of frames comprises analysis of at least one or more particular colors or wavelengths of light within said color image data, the one or more colors or wavelengths selected from the group consisting of red, green and blue (R-G-B).
-
160. The method of claim 158, wherein the pixel intensity analysis of at least a region of the at least first one of the plurality of frames comprises analysis of at least one spatial boundary or interface associated with the physiological artifact.
-
161. The method of claim 158, wherein the detecting, using the at least one computer program, the presence of a physiological artifact rendered within at least a first one of the plurality of frames, further comprises detecting the presence of the physiological artifact based at least on a parametric analysis of at least a region of the at least first one of the plurality of frames, the boundary or interface enabling at least in part the determination of one or more parameters for the parametric analysis.
-
162. The method of claim 161, wherein the one or more parameters for the parametric analysis comprise one or more dimensions of the physiological artifact, and the parametric analysis comprises comparison of the one or more dimensions to corresponding one or more data values of a template data structure accessible to the computerized device.
-
163. The method of claim 161, wherein the one or more parameters for the parametric analysis comprise two or more dimensions of the physiological artifact, and the parametric analysis comprises comparison of one or more ratios formed by the two or more dimensions to corresponding two or more data values of a template data structure accessible to the computerized device.
-
164. The method of claim 156, wherein the detecting, using the at least one computer program, the presence of a physiological artifact rendered within at least a first one of the plurality of frames, comprises detecting the presence of the physiological artifact based at least on performing a homologity analysis of at least a region of the at least first one of the plurality of frames with respect to data of a previously stored image.
-
165. The method of claim 164, wherein the homologity analysis of at least a region of the at least first one of the plurality of frames with respect to data of a previously stored image comprises a pixel-by-pixel intensity comparison, and the method further comprises generating a score or metric of the degree of matching between the data and the data of the previously stored image based at least on the comparison.
-
166. A method of processing data derived from a sensor disposed on a swallowable intestinal probe, said data having been generated while said probe moved through the intestinal tract of a living being, the method comprising:
-
a step for receiving the data at a computerized device; and a step for processing at least a portion of said received data using at least said computerized device, said step for processing comprising: a step for generating a plurality of image frame means from said at least portion of said received data; a step for analyzing said plurality of image frame means, the analyzing for identifying a subset of said plurality of image frame means of potential interest to a user of the computerized device, the identifying of the subset based at least in part on a step for recognition of (i) a multi-sided shape, or (ii) a physiological artifact, rendered in at least one of the plurality of frame means of the subset; and a step for generating a display on a monitor associated with the computerized device, the step for generating comprising a step for displaying at least a portion of said subset of frame means in a prescribed sequence.
-
Current Assignee: West View Research LLC
-
Original Assignee: West View Research LLC
-
Inventors: Gazdzinski, Robert F.
-
Primary Examiner(s): Johns, Andrew W
Application Number: US14/585,050
-
Time in Patent Office: 1,170 Days
-
CPC Class Codes:
- A61B 1/00009 : of image signals during a u...
- A61B 1/00016 : using wireless means
- A61B 1/0002 : provided with data storages
- A61B 1/00032 : internally powered
- A61B 1/00036 : Means for power saving, e.g...
- A61B 1/0004 : for electronic operation
- A61B 1/0005 : combining images e.g. side-...
- A61B 1/00059 : provided with identificatio...
- A61B 1/00156 : using self propulsion
- A61B 1/00158 : using magnetic field
- A61B 1/041 : Capsule endoscopes for imaging
- A61B 1/043 : for fluorescence imaging
- A61B 1/0638 : providing two or more wavel...
- A61B 1/273 : for the upper alimentary ca...
- A61B 1/31 : for the rectum, e.g. procto...
- A61B 10/02 : Instruments for taking cell...
- A61B 18/20 : using laser
- A61B 2562/162 : Capsule shaped sensor housi...
- A61B 5/0071 : by measuring fluorescence e...
- A61B 5/0075 : by spectroscopy, i.e. measu...
- A61B 5/0084 : for introduction into the b...
- A61B 5/073 : Intestinal transmitters
- A61B 5/411 : Detecting or monitoring all...
- A61B 5/4255 : Intestines, colon or appendix
- A61B 5/6861 : Capsules, e.g. for swallowi...
- A61B 5/7232 : involving compression of th...
- A61B 5/7282 : Event detection, e.g. detec...
- A61B 8/12 : in body cavities or body tr...
- A61B 8/4245 : involving determining the p...
- A61B 8/4472 : Wireless probes
- A61B 8/4488 : the transducer being a phas...
- A61B 8/56 : Details of data transmissio...
- A61N 2/002 : in combination with another...
- A61N 2005/1005 : with asymmetrical radiation...
- A61N 5/1014 : Intracavitary radiation the...
- G06T 2207/30028 : Colon; Small intestine
- G06T 7/20 : Analysis of motion motion e...
- G16Z 99/00 : Subject matter not provided...
- H04L 1/004 : by using forward error cont...