Video endoscopic system
Abstract
Image data representing an image captured by a video endoscopic device is converted from a first color space to a second color space. The image data in the second color space is used to determine the location of features in the image.
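The claims do not name the two color spaces; a common pairing for hue-based analysis is capture in RGB and analysis in HSV. A minimal sketch of the conversion step using Python's standard-library `colorsys` (the function name and 8-bit input range are illustrative assumptions, not taken from the patent):

```python
import colorsys

def rgb_to_hsv_image(pixels):
    """Convert a list of (r, g, b) pixels (0-255) to (h, s, v) tuples.

    h, s, and v are in [0, 1]; the hue channel is what a hue-based
    grouping step would operate on.
    """
    return [colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
            for (r, g, b) in pixels]

# A 2-pixel "image": pure red and pure green.
hsv = rgb_to_hsv_image([(255, 0, 0), (0, 255, 0)])
# Pure red maps to hue 0.0; pure green to hue 1/3.
```

In a real endoscopic pipeline the same conversion would run per frame on the full pixel buffer, but the per-pixel mapping is identical.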
30 Claims
1. A method comprising:
accessing, by an application stored in a non-transitory memory of a computing device and executable by a processor, image data representing an image captured by a video endoscopic device, wherein the image data is encoded in a first color space;
converting, by the application, the accessed image data from the first color space to a second color space, wherein the second color space is different from the first color space;
identifying, by the application, a location of a feature in the image by analyzing the image data in the second color space via:
    grouping, by the application, pixels in the image data in the second color space into a plurality of groups based on hue values of the pixels;
    determining, by the application, a first group of pixels from among the plurality of groups of pixels;
    determining, by the application, a second group of pixels from among the plurality of groups of pixels; and
    selecting, by the application, one of the first or second group of pixels based on a relative color difference between the first and second groups of pixels;
storing, by the application, segmentation data that indicates the location of the feature in the image, wherein the segmentation data indicates the selected group of pixels; and
displaying, by the application, based on the segmentation data, the image with an indication of the identified location of the feature.
View Dependent Claims (2, 3, 4, 5, 6, 7)
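Claim 1's grouping and selection steps can be sketched concretely. Below, hue binning stands in for the claimed "plurality of groups", and "relative color difference" is interpreted, purely as an assumption, as each group's mean-hue distance from the image's overall mean hue; all names and the bin count are illustrative:

```python
import colorsys

def hue_group(pixels, bins=8):
    """Group pixel indices into hue bins (the claimed plurality of groups)."""
    groups = {}
    for i, (r, g, b) in enumerate(pixels):
        h, _, _ = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        groups.setdefault(min(int(h * bins), bins - 1), []).append(i)
    return groups

def select_group(pixels, groups):
    """Pick between the two largest hue groups by relative color difference.

    Assumption: "relative color difference" is read here as the gap between
    a group's mean hue and the overall mean hue; the larger gap wins.
    Assumes at least two groups exist.
    """
    hues = [colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)[0]
            for (r, g, b) in pixels]
    overall = sum(hues) / len(hues)
    first, second = sorted(groups.values(), key=len, reverse=True)[:2]

    def gap(indices):
        return abs(sum(hues[i] for i in indices) / len(indices) - overall)

    return max((first, second), key=gap)

# Tiny example: three red pixels and two green pixels.
pixels = [(255, 0, 0)] * 3 + [(0, 255, 0)] * 2
selected = select_group(pixels, hue_group(pixels))  # indices of one group
```

The selected indices would then be stored as segmentation data and used to overlay an indication of the feature on the displayed image, per the storing and displaying steps.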
8. A system comprising:
a video endoscopic device configured to:
    generate image data representing an image captured by the video endoscopic device, wherein the image data is encoded in a first color space; and
    transmit the image data to a computing device; and
a computing device configured to:
    receive the image data transmitted by the video endoscopic device;
    convert the received image data from the first color space to a second color space, wherein the second color space is different from the first color space;
    identify a location of a feature in the image by analyzing the image data in the second color space, wherein to identify the location of the feature in the image by analyzing the image data in the second color space, the computing device is configured to:
        group pixels in the image data in the second color space into groups based on hue values of the pixels;
        determine a first group of pixels from among the groups of pixels;
        determine a second group of pixels from among the groups of pixels; and
        select one of the first or second group of pixels based on a relative color difference between the first and second groups of pixels;
    store segmentation data that indicates the location of the feature in the image, wherein the segmentation data indicates the selected group of pixels; and
    display, based on the segmentation data, the image on a display device with an indication of the identified location of the feature.
View Dependent Claims (9, 10, 11, 12, 13, 14)
15. A method comprising:
accessing, by an application stored in a non-transitory memory of a computing device and executable by a processor, image data representing video captured by a video endoscopic device, wherein the image data is encoded in a first color space;
converting, by the application, the accessed image data from the first color space to a second color space, wherein the second color space is different from the first color space;
identifying, by the application, a location of a landmark feature in the video by analyzing the image data in the second color space;
tracking, by the application, a position of the landmark feature over multiple frames of the image data;
generating, by the application, an anatomical model based on the tracked landmark feature;
determining, by the application, a location of a target anatomical feature in the video based on the anatomical model; and
displaying, by the application, the video with an indication of the location of the target feature.
View Dependent Claims (16, 17, 18, 19, 20, 21, 22)
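Claim 15 leaves the tracking and modeling algorithms open. One simple reading is nearest-neighbour association of candidate detections across frames, with the "anatomical model" reduced to a fixed landmark-to-target spatial offset. The function names, the candidate-point representation, and the offset are all illustrative assumptions:

```python
def track_landmark(frames, start):
    """Track a landmark across frames by nearest-point association.

    Each frame is a list of candidate (x, y) feature locations; the track
    follows the candidate closest to the previous position. This is a
    stand-in for the patent's unspecified tracking step.
    """
    track = [start]
    for candidates in frames:
        prev = track[-1]
        track.append(min(candidates,
                         key=lambda p: (p[0] - prev[0])**2 + (p[1] - prev[1])**2))
    return track[1:]

def target_from_model(landmark, offset=(10, -5)):
    """Locate the target feature from the tracked landmark.

    Assumption: the "anatomical model" is reduced to a fixed offset from
    the landmark, purely for illustration.
    """
    return (landmark[0] + offset[0], landmark[1] + offset[1])

# Three frames, each with one nearby candidate and one distant distractor.
frames = [[(0, 0), (50, 50)], [(2, 1), (49, 48)], [(3, 3), (47, 47)]]
track = track_landmark(frames, (1, 1))
target = target_from_model(track[-1])
```

A real implementation would build the model from the tracked positions (e.g., fitting landmark geometry over time) rather than from a constant offset, but the data flow, track then model then locate target, mirrors the claimed steps.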
23. A system comprising:
a video endoscopic device configured to:
    generate image data representing video captured by the video endoscopic device, wherein the image data is encoded in a first color space; and
    transmit the image data to a computing device; and
a computing device configured to:
    receive the image data transmitted by the video endoscopic device;
    convert the received image data from the first color space to a second color space, wherein the second color space is different from the first color space;
    identify a location of a landmark feature in the video by analyzing the image data in the second color space;
    track a position of the landmark feature over multiple frames of the image data;
    generate an anatomical model based on the tracked landmark feature;
    determine a location of a target anatomical feature in the video based on the anatomical model; and
    display the video on a display device with an indication of the location of the target feature.
View Dependent Claims (24, 25, 26, 27, 28, 29, 30)