All right, moving on to our next paper. This is now our final society paper for IPEG. We have Mark Ryan from Children's Mercy Kansas City, presenting "From Scan to Scalpel: Creating Surgical Blueprints with 3D Imaging."

This is Mark Ryan, and today I'll be talking about different ways that you can apply 3D reconstruction to plan for complex surgical procedures. What you see here is a 3D representation of the normal views you have with a CT scan of the chest: the coronal, the sagittal, and the axial. For any given slice you have a plane of pixels. If each of these slices is a pancake, then the volumetric rendering is basically looking at the entire stack of pancakes. The software uses light reflection and shading techniques to create the illusion of depth.

What you can't do with volume rendering is select individual structures or edit them independently. To do that, you have to perform segmentation, which involves selecting the specific pixels for the structure of interest on each slice. Now you can select the cartilage or the ribs or the vascular structures. Segmentation isn't as detailed, so you still rely on volume rendering for the fine detail in these images. Volume rendering gives you a very high level of detail, but when two objects are of similar density, it can be really hard to tell them apart.

To give an example of how I use volume rendering: skeletal structures are fairly easy to isolate, since they're much higher density than everything else. In this case, I measured the angle of the osteotomy that needed to be performed on the sternum, and the distance of the cut from the manubrium, so that I could plan for placement of bars. The main advantage of volume rendering is that it's fast. Even in a complex case like these conjoined twins, you can model very rapidly and plan the procedure, or just have a basic reference for the operating room. The advantage of segmentation is the ability to select and model individual structures.
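The density-based isolation described above can be sketched in a few lines. This is a hypothetical illustration on a synthetic volume, assuming a Hounsfield-unit cutoff of roughly 300 for bone; real data would come from a DICOM series:

```python
import numpy as np

# Synthetic CT volume: a stack of axial "pancake" slices (z, y, x),
# with intensities in Hounsfield units (HU). The values are invented
# for illustration.
volume = np.full((4, 8, 8), -1000, dtype=np.int16)   # air
volume[:, 2:6, 2:6] = 40                             # soft tissue, ~40 HU
volume[:, 3:5, 3:5] = 700                            # bone, well above 300 HU

# Volume rendering shades the whole stack at once; segmentation instead
# selects the voxels belonging to one structure. Bone is easy to isolate
# because its density sits far above everything else.
BONE_THRESHOLD_HU = 300                              # assumed cutoff
bone_mask = volume > BONE_THRESHOLD_HU

print(int(bone_mask.sum()))   # → 16 voxels labeled as bone
```

Soft tissue structures of similar density would both fall near 40 HU here, which is exactly why a simple threshold (and volume rendering generally) struggles to tell them apart.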
So if you want to get rid of the left fourth rib, you can do that, as long as it's been segmented individually. But as you can see in the video on the right, creating and editing these models can be very time consuming. That's where AI comes in. These images are from a free program called 3D Slicer, and there's an extension to that program called TotalSegmentator. Within 75 seconds it can read the scan and give me models for 106 different structures in the body. From there you can select the segments, or structures, you don't want and remove them, and you can make the remaining structures transparent. Now you have anatomic landmarks. From there you can reinsert the volume rendering with all the detail. In this case, we whittled it down to just the vascular structures, so now you have anatomic landmarks with detailed vasculature, and I'm using it to plan where I'm going to transect the pancreas adjacent to a neoplasm in the body. With minimal input on the part of the surgeon, you have a pretty detailed road map of where you're going to be going and what you're going to be doing during the case.

This is another combined volume rendering and segmentation road map, which we used for a 17-year-old girl with severe pectus carinatum. The combined imaging helped us map our hardware placement, and during the case it was really useful in communicating with the OR staff about the different steps of the procedure. But what I really wanted to avoid was having to yell at the poor circulating nurse about where to scroll the mouse and in what direction. So what we came up with was a motion capture device called the Leap Motion, which uses infrared cameras to map your hand, so you can move the images as you would with a mouse. From there, the laptop is just connected to the OR display system. For really complex problems, like the conjoined twins I mentioned earlier, looking at the images in virtual reality can be really helpful.
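The whittling-down step described above, removing the segments you don't want and keeping only the structures of interest, can be sketched as a label-map operation. Tools like TotalSegmentator emit one integer label per anatomic structure; the label IDs and the tiny label map below are invented for illustration:

```python
import numpy as np

# Multi-label segmentation map: each voxel holds the integer ID of the
# structure it belongs to (0 = background). IDs here are hypothetical.
RIB_LEFT_4 = 12
AORTA = 52
PANCREAS = 7

labels = np.zeros((2, 4, 4), dtype=np.uint8)
labels[:, 0, :] = RIB_LEFT_4
labels[:, 1, :] = AORTA
labels[:, 2, :] = PANCREAS

# Because each structure was segmented individually, any one of them can
# be removed independently -- e.g. deleting the left fourth rib while
# keeping the vasculature and the pancreas as landmarks.
keep = [AORTA, PANCREAS]
whittled = np.where(np.isin(labels, keep), labels, 0)

print(sorted(int(x) for x in np.unique(whittled)))   # → [0, 7, 52]
```

In 3D Slicer the same operation is a few clicks in the Segmentations module rather than code, but the underlying data is this kind of per-voxel label map.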
The ability to see depth with volume rendering images makes it a lot easier to make sense of the anatomy, and you're no longer limited to the standard planes of imaging. The software I use is called Medical Holodeck, and it's pretty straightforward to view these images either by myself or in collaboration with partners. Newer devices like the Quest 3 or the Apple Vision Pro have video pass-through capabilities, so you can overlay these images directly onto the patient for marking or other preoperative planning.

One of the interesting things you can do with segmentations is export them directly from the 3D Slicer program as STL files, which can then be used for 3D printing. Newer consumer-grade printers can print complex shapes using a variety of materials, and with dissolvable support materials you can limit the amount of post-processing or cleanup work you have to do after printing. Now, these are just small figures that I make for some of my patients, but what's exciting is the possibility of creating tailored simulation tools for some of the more complex congenital problems we take care of. For example, here's a neonatal hemothorax model created from a CT scan.

The takeaway from all of this is that there are free or inexpensive tools available that allow you to create 3D reconstructions and apply them in a variety of ways. Whether it's printing or VR or just visualizing on the screen, the tools are out there, and I'd like to help others take advantage of them.

I love the last line you said, because I want to take advantage of it. What I love is that you triangulated: this wasn't just a presentation on a single modality. You brought together, I don't know how many, six different things and used them in combination. When people present image-guided surgery it's usually one thing, but the key is bringing them all together. I love this.
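The STL files mentioned above are a very simple triangle-list format, which is why so many tools can exchange them. As a sketch, here is a minimal pure-standard-library writer for the binary STL layout, with one hypothetical triangle; a real export from 3D Slicer would contain the thousands of triangles tiling a segmented surface:

```python
import struct

def write_binary_stl(path, triangles):
    """Write triangles as binary STL: an 80-byte header, a uint32
    triangle count, then per triangle a normal and three vertices
    (12 float32s) plus a uint16 attribute -- 50 bytes each."""
    with open(path, "wb") as f:
        f.write(b"minimal stl sketch".ljust(80, b"\0"))    # header
        f.write(struct.pack("<I", len(triangles)))         # triangle count
        for normal, v1, v2, v3 in triangles:
            f.write(struct.pack("<12f", *normal, *v1, *v2, *v3))
            f.write(struct.pack("<H", 0))                  # attribute bytes

# One made-up triangle: a normal followed by three vertices.
tri = ((0.0, 0.0, 1.0),
       (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
write_binary_stl("segment.stl", [tri])
```

The resulting file is 80 + 4 + 50 = 134 bytes; slicing software for a consumer printer reads this same layout directly.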
How much of this, you mentioned the hand-motion setup, but how much of this do you guys have a patent on? Is this a bundle that you could then offer to someone else?

You're muted. There. Okay, better. So, I didn't make any of this stuff. The software is free; it's been around for 10 or 15 years. The VR software I didn't make; that's from a company in Switzerland who are super nice, and I bother them frequently for features or when I can't get something to work. And the motion device, I think the company got bought up, but the device itself was about a hundred bucks. The whole point is that if you go to Ethicon or wherever, they're going to say it's 500 bucks a patient, or per scan, or whatever, so the price point of the commercial stuff is really high. I treat this more like Ratatouille: anybody can cook. All you need is to know where the buttons are, and I can turn around a volume rendering in a few minutes and have a little spinning model in the OR just to reference or to communicate with the OR staff. I think this needs to be approachable for anybody, in any country, using any set of hardware, as long as it's relatively inexpensive.

Great, great work, very exciting.

Thank you so much. Thanks for having me.