Image annotation in image-guided medical procedures
First Claim
1. A method for image annotation in image guided medical procedures, comprising:
determining, with one or more computing devices, position and orientation of a first medical device based at least on data from a first 3D tracking unit coupled with the first medical device;
automatically selecting, using the one or more computing devices, a 2D plane within a 3D space based at least on the position and orientation of the first medical device;
selecting visualizable medical data in the 2D plane from a 3D medical data set of a patient based at least on the position and orientation of the first medical device;
displaying on one or more displays the 3D space and the visualizable medical data in the 2D plane;
determining, with the one or more computing devices, position and orientation of a second medical device over time based at least on data from a second 3D tracking unit coupled with the second medical device;
generating an annotation in the 3D space on the visualizable medical data in the 2D plane based at least on the position and orientation of the second medical device over time;
generating image guidance information, with the one or more computing devices, based at least on the annotation in the 3D space; and
displaying, on one or more displays, a graphical rendering of the image guidance information.
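The first two claimed steps, deriving a 2D plane from the tracked device's position and orientation, then selecting the visualizable medical data lying in that plane from a 3D data set, can be sketched in code. Everything below (the plane-through-the-tip policy, the names `select_plane` and `sample_slice`, and nearest-neighbour resampling) is an illustrative assumption, not the patent's actual implementation:

```python
import numpy as np

def select_plane(tip_pos, device_axis):
    # Hypothetical policy: place the slice plane through the tracked
    # device tip, with the plane normal along the device's long axis.
    normal = np.asarray(device_axis, dtype=float)
    normal /= np.linalg.norm(normal)
    # Build an orthonormal in-plane basis (u, v) perpendicular to the normal.
    helper = np.array([0.0, 0.0, 1.0])
    if abs(normal @ helper) > 0.9:  # avoid a degenerate cross product
        helper = np.array([0.0, 1.0, 0.0])
    u = np.cross(normal, helper)
    u /= np.linalg.norm(u)
    v = np.cross(normal, u)
    return np.asarray(tip_pos, dtype=float), u, v

def sample_slice(volume, origin, u, v, size=64, spacing=1.0):
    # Nearest-neighbour resampling of the 3D data set on the selected
    # plane; a real system would use trilinear interpolation and the
    # scanner's voxel spacing.
    half = size // 2
    img = np.zeros((size, size), dtype=volume.dtype)
    for i in range(size):
        for j in range(size):
            p = origin + (i - half) * spacing * u + (j - half) * spacing * v
            idx = np.round(p).astype(int)
            if np.all(idx >= 0) and np.all(idx < np.array(volume.shape)):
                img[i, j] = volume[tuple(idx)]
    return img
```

The in-plane basis is arbitrary up to rotation about the normal; a real system would anchor it to the display or patient orientation so the slice does not spin as the device rolls.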
Abstract
Presented herein are methods, systems, devices, and computer-readable media for image annotation in image-guided medical procedures. Some embodiments herein allow physicians or other operators to use one or more medical devices in order to define annotations in 3D space. These annotations may later be displayed to the physician or operator in 3D space in the position in which they were first drawn or otherwise generated. In some embodiments, the operator may use various available medical devices, such as needles, scalpels, or even a finger in order to define the annotation. Embodiments herein may allow an operator to more conveniently and efficiently annotate visualizable medical data.
21 Claims
1. A method for image annotation in image guided medical procedures, comprising the steps recited in full above. Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9, 10, 20, 21.
11. A system for image annotation in image guided medical procedures, comprising one or more computing devices, said computing devices being configured to:
determine position and orientation of a first medical device based at least on data from a first 3D tracking unit coupled with the first medical device;
select a 2D plane of visualizable medical data from a 3D medical data set of a patient based at least on the position and orientation of the first medical device;
determine position and orientation over time for a second medical device based at least on data from a second 3D tracking unit coupled with the second medical device;
generate an annotation in 3D space on the visualizable medical data of the 2D plane of visualizable medical data based at least on the position and orientation over time for the second medical device;
generate image guidance information based at least on the annotation in 3D space; and
display, on one or more displays, a graphical rendering of the image guidance information.
Dependent claims: 12, 13, 14, 15.
16. A non-transient computer-readable medium comprising computer-executable instructions for image annotation in image guided medical procedures, said computer-executable instructions, when executing on one or more computing devices, cause the one or more computing devices to:
determine position and orientation of a first medical device;
select one or more planes of visualizable medical data of a patient based at least on the position and orientation of the first medical device;
determine position and orientation over time for a second medical device;
generate an annotation in the 3D space based on the position and orientation over time of the second medical device and the one or more planes of visualizable medical data;
generate image guidance information, with the one or more computing devices, based at least on the annotation in the 3D space; and
display, on one or more displays, a graphical rendering of the image guidance information.
Dependent claims: 17, 18, 19.
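The annotation-generation step shared by all three independent claims, turning the second device's tracked position over time into an annotation in 3D space on the selected plane, might look like the following sketch. The projection policy and the function name `accumulate_annotation` are assumptions for illustration, not the claimed implementation:

```python
import numpy as np

def accumulate_annotation(tip_positions, origin, u, v):
    # Hypothetical sketch: convert a time series of tracked tip positions
    # (the second medical device) into an annotation polyline. Each sample
    # is projected onto the slice plane defined by (origin, u, v), but the
    # result is kept as 3D points so the annotation can later be rendered
    # in the position where it was first drawn.
    pts3d = []
    for p in tip_positions:
        p = np.asarray(p, dtype=float)
        d = p - origin
        # Keep only the in-plane components of the displacement.
        q = origin + (d @ u) * u + (d @ v) * v
        pts3d.append(q)
    return np.array(pts3d)
```

In practice the operator's device may hover off the plane (the abstract even contemplates annotating with a finger), so snapping each sample onto the plane is one plausible way to tie the drawn stroke to the displayed 2D image data.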
Specification