3D Printing For CT Scan Analysis, Space Education


Seth Horowitz is a neuroscientist and assistant research professor in the Department of Ecology and Evolutionary Biology at Brown University as well as a maker and a 3D printing enthusiast. He shares this report on some ways that he has been using his 3D printer, including a new research method.

Three years ago I had an interesting problem: I needed a device for an experiment that could hold a live bat comfortably, but in such a way that it couldn't bite or move its head around. In the past, I'd worked with engineers who would make very complex Plexiglas, cage-like devices that worked well, but you had to have several to fit different sizes (and species) of bats. Each one could take weeks to make and cost over a thousand dollars.

Around this time, 3D printer kits were beginning to be talked about on the web, and I decided to see if I could use one of these things to custom-print live bat holders. I got a small pilot grant from the NASA Rhode Island Space Grant (the research was relevant to NASA's interests: bats are beloved subjects for the moving-about-in-the-dark crowd) and purchased a MakerBot Cupcake.

After several months of construction, assembly, disassembly, swearing, and reconfiguring, I had my 3D printed bat holder, which used about 50 cents' worth of plastic and took all of two hours to print. But how many bat holders do you actually need? In trying to figure out what else I could do with my Cupcake, I realized that 3D printing is a new form of data actualization: taking the simplified, coded representation of an object and creating that object, a mechanical corollary of going from genes to proteins. And with the wealth of 3D data around, the possibilities are almost endless.

For at least the last decade, 3D models and their images have been common in science and engineering: CT scans create three-dimensional images of skeletons and dense tissues, and MRI does the same for soft tissues. Digital terrain modeling combines multiple images taken from different orbital perspectives to reconstruct planetary and lunar surfaces for 3D flyovers. But all of these have inherent limitations. Individual elements of the images have to go through substantial filtering to allow a clean view of regions of interest, which of course means you filter out interesting things while looking for others. Overlapping elements blur finer structures, giving you nice overviews of the outside of your object but lacking internal detail that is not always recoverable just by changing your point of view. And a major limitation is that these are still images: no matter how pretty or detailed, they constrain information about a complex object into strictly visual form. When you take these 3D visual representations and convert them back into physical objects, you not only reopen possibilities for examining them visually, but also gain detail from our exquisitely fine sense of form through touch.

Figure 1. CT scan of adult bullfrog showing region of deformity

I found one application by examining data from an old study I did. Much of my work has focused on auditory development, using bullfrogs as a model. Bullfrogs are interesting models for human hearing for two reasons: first, their hearing is very similar to low-frequency (<2500 Hz) hearing in humans, and second, their brains are in some ways more resilient and flexible than ours. For example, frogs can actually regenerate their central nervous system after damage, something we wish humans could do to prevent things like noise-induced hearing loss. But they pay a price for this plasticity: they are also much more prone to damage from environmental toxins and conditions.

In 2004, during a frog recording session, one member of the lab spotted and caught an odd adult male bullfrog: it had only one ear. It seemed otherwise healthy, but since frogs are very dependent on hearing for social behavior, this frog was going to have trouble breeding and defending its territory. We took a CT scan of it to see if we could determine the extent of its malformation. A CT scan is a series of X-rays taken in a continuous spiral down an area of interest, which allows you to create a 3D model of the bone and dense tissues. The CT scan of the frog (Figure 1) showed that while its inner ear seemed normal on both sides, on one side it was lacking the eardrum and the small piece of cartilage, called the stapes (or stapedium), that connects the external tympanum to the inner ear.

Figure 2. 3D printed model based on CT data

It was not until we found a second frog with the same malformation that we began to realize something was going on here. These two frogs showed no signs of injury, so it was more likely that something had happened during development. The CT scan images led us to believe that, since the inner ears looked normal, this might be similar to a human condition called aural atresia, which can cause malformation of the outer and middle ears but leave the inner ears intact. But now, years later, I decided to examine the images again, this time with the aid of my 3D printer. I took the raw CT files, used the open source program ImageJ to export the data of one section of the skull as a printable stereolithography (STL) file, and created a physical model, enlarged about 25 times (Figure 2).
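If you'd rather script this step than click through ImageJ, here is a rough sketch of the same idea in Python, using pydicom, scikit-image, and numpy-stl. The folder name, density threshold, and scale factor are all placeholders to tune for your own scan; this illustrates the technique, not the exact pipeline used here.

```python
# Sketch: CT slice stack -> isosurface -> scaled STL.
import numpy as np
import pydicom                      # reads DICOM slice files
from pathlib import Path
from skimage import measure         # marching cubes isosurface
from stl import mesh                # numpy-stl, writes binary STL

# Load every slice in the scan folder ("frog_ct" is a placeholder) and
# stack them into a 3D volume, sorted by slice position.
slices = [pydicom.dcmread(str(p)) for p in sorted(Path("frog_ct").glob("*.dcm"))]
slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
volume = np.stack([s.pixel_array for s in slices]).astype(np.int16)

# Extract an isosurface at a threshold that separates bone from soft
# tissue (300 is an assumption -- tune it for your scanner). This
# assumes roughly isotropic voxels; pass spacing= for real scans.
verts, faces, _, _ = measure.marching_cubes(volume, level=300)

# Scale the model up 25x, as in the article, and write a binary STL.
verts *= 25.0
surface = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
for i, face in enumerate(faces):
    surface.vectors[i] = verts[face]
surface.save("frog_skull_25x.stl")
```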

As soon as I had the model in hand and could turn and handle it, I noticed that there were in fact asymmetries in the regions where the auditory (8th) nerve leaves the inner ear to connect to the brain, suggesting that this malformation was not similar to aural atresia. Rather, it was likely due to exposure to insecticides that turn into teratogens in the presence of UV light and can cause more extensive abnormalities at certain points in development. The 3D printed model ended up giving greater insight into what caused the abnormality than the original images on the computer did. Creating a physical, printable model lets you use the tools you evolved to use together, your hands and eyes, to push your findings beyond what even expensive hardware and software can show.

Another interest of mine is space education and outreach, and I wanted to apply 3D printing here as well. Exploration of worlds (including the Earth) is one of the most exciting human adventures of the 20th and 21st centuries, and yet the excitement comes almost exclusively from imagery. Mass and salinity globes of the Earth, 3D fly-throughs of canyons on Mars and glacial cracks on Jupiter's moon Europa, high-definition views of lunar craters: with few exceptions, all of these and more are available only visually. Physical models, such as custom-made limited editions of asteroid shapes, cost thousands of dollars. Textured globes and maps that let someone feel mountain ridges and landmass shapes have been around for over a century, originally developed for the blind, but they are only available for common teaching tools such as Earth globes. So how can you bring space and earth science education to the 37 million people in the world who are totally blind, not to mention the 124 million who are nearly so? And beyond that, how much more would sighted people get out of being able to physically handle a model of an asteroid?

In 2010, I started seeking out 3D data on the shapes of asteroids to see if it would be possible to print models of space bodies and terrains. I found a wealth of asteroid shapes derived from radar data (in large part by Professor Scott Hudson of the School of Electrical Engineering at Washington State University), as well as Martian digital terrain data from the University of Arizona's HiRISE group, some of which was already being used in space simulation programs like Celestia. I began taking these NASA-based data and (after significant work) converting them into stereolithography format, printing physical models of asteroids, the Martian moons Phobos and Deimos, and even planetary features such as the Martian crater Gusev (Figure 3).
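Much of that "significant work" comes down to parsing whatever format the shape data arrives in and re-emitting its triangles as STL. Here is a rough sketch under an assumed layout: a plain-text file with a header line giving vertex and facet counts, then one "x y z" vertex per line, then one triangle per line as three 1-based indices. The filename and scale factor are placeholders, and real shape files vary, so check yours before trusting this parser.

```python
# Sketch: plain-text shape model (assumed layout) -> binary STL.
import numpy as np
from stl import mesh   # numpy-stl

with open("eros.shape") as f:          # hypothetical filename
    tokens = f.read().split()

n_verts, n_faces = int(tokens[0]), int(tokens[1])
data = np.array(tokens[2:], dtype=float)

# First block: vertex coordinates; second block: triangle indices
# (converted from 1-based to 0-based).
verts = data[: n_verts * 3].reshape(n_verts, 3)
faces = data[n_verts * 3 :][: n_faces * 3].reshape(n_faces, 3).astype(int) - 1

# Scale from model units (often km) to a printable size; the factor
# here is an arbitrary example.
verts = verts * 2.0

solid = mesh.Mesh(np.zeros(n_faces, dtype=mesh.Mesh.dtype))
for i, face in enumerate(faces):
    solid.vectors[i] = verts[face]
solid.save("eros.stl")
```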

Figure 3. Small space bodies from images (above) and 3D printed versions (below).

But to show how the pace of online software development feeds new ideas in education and making: I was able to scoop NASA in making a model of the asteroid Vesta. Vesta is the second most massive asteroid in the main belt and is very different from most other asteroids and space bodies. I particularly wanted a model of Vesta to compare with other "potato-shaped" asteroids such as Eros, because holding the two would give someone an immediate visceral (or at least haptic) grasp of the difference in shape that emerges from gravity-induced differentiation, from rubble pile to almost-planet.

Although Vesta was being orbited by the Dawn probe, which was sending back thousands of beautiful images, NASA had not yet released the "official" 3D shape model. But I found two ways around this. First, by feeding the images that showed the rotation of Vesta into a free online 3D modeling program (www.my3dscanner.com), I was able to get a basic point cloud: a shape based on correlations between similar light and dark points across successive images. Using that for some of the details, I combined it with the released "global map" of Vesta, mapping the map onto a flattened ovoid derived from the shape of some of the orbital images. This let me create a somewhat low-res but accurate 3D model even before the official release (Figure 4).
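The "map onto a flattened ovoid" step boils down to displacement mapping: drape a cylindrical global map over an ellipsoid and push the surface outward by pixel brightness as a crude stand-in for relief. Below is a rough sketch of that idea; the radii, relief amplitude, and filename are invented for illustration, and the result has an open seam and poles that need mesh repair before printing.

```python
# Sketch: cylindrical global map -> displaced ellipsoid -> STL.
import numpy as np
from PIL import Image
from stl import mesh   # numpy-stl

# Load the map as grayscale (rows = latitude, cols = longitude) and
# downsample so the mesh stays a printable size.
height = np.asarray(Image.open("vesta_global_map.png").convert("L"), dtype=float)
height = height[::8, ::8] / 255.0
rows, cols = height.shape

# Flattened ovoid radii (Vesta is noticeably squashed at the poles);
# the values and the 10% relief amplitude are assumptions.
a, c, relief = 100.0, 80.0, 10.0

lat = np.linspace(-np.pi / 2, np.pi / 2, rows)
lon = np.linspace(-np.pi, np.pi, cols)
lon_g, lat_g = np.meshgrid(lon, lat)

# Ellipsoid surface, pushed outward by the map value.
r_scale = 1.0 + relief / 100.0 * height
x = r_scale * a * np.cos(lat_g) * np.cos(lon_g)
y = r_scale * a * np.cos(lat_g) * np.sin(lon_g)
z = r_scale * c * np.sin(lat_g)

pts = np.stack([x, y, z], axis=-1)            # (rows, cols, 3)
idx = np.arange(rows * cols).reshape(rows, cols)

# Two triangles per grid cell. Note the seam at +/-180 degrees
# longitude and the open poles: clean these up before printing.
faces = []
for i in range(rows - 1):
    for j in range(cols - 1):
        faces.append([idx[i, j], idx[i + 1, j], idx[i, j + 1]])
        faces.append([idx[i + 1, j], idx[i + 1, j + 1], idx[i, j + 1]])

verts = pts.reshape(-1, 3)
solid = mesh.Mesh(np.zeros(len(faces), dtype=mesh.Mesh.dtype))
for k, face in enumerate(faces):
    solid.vectors[k] = verts[face]
solid.save("vesta_lowres.stl")
```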

Figure 4. The asteroid Vesta – Dawn probe image on the left, my 3D printed version on the right.

This story isn't about being able to scoop NASA; it's about demonstrating that the wealth of free tools and data out there can empower the interested. Going from images to 3D model to printed object lets you create your own scale models of the universe. You could build a curriculum that lets the blind feel the Mid-Atlantic Ridge, or tell the difference between a stark, sharp lunar crater and a weather-eroded Martian one. And at a professional level, you could print accurate models of terrain to test roving or sample-gathering vehicles. All of this helps us continue our exploration, includes a wider audience, and motivates new generations of students, sighted and not, to realize they can hold models of the universe in their own hands.

Seth Horowitz

10 thoughts on “3D Printing For CT Scan Analysis, Space Education”

  1. hairykiwi says:

    Very interesting Seth! I was thinking just the other day how I might go about printing solids from CT scans. Now I know where to start – ImageJ looks like a very useful tool.

    Many thanks,
    Hamish

  2. Seth S. Horowitz (@SethSHorowitz) says:

    Be sure to get the ImageJ 3D Viewer plugin and use RGB images, then only select one color slice to make the model. If you choose surfaces it allows you to export in .obj format, but beware – it can be very memory intensive for large format image sources. I usually drop the size a bit.

    1. hairykiwi says:

      Many thanks again for the extra tips – I intend having a play quite soon.

  3. hairykiwi says:

    I finally succeeded at printing some CT scan data on a RepRap 3D printer and thought it might be useful to share my experience and workflow:

    CT scan data (DICOM format) converted to STL mesh using the free InVesalius v3.0b1.
    Note: ImageJ, Slicer, and InVesalius v3.0b3 (DICOM -> STL tools) all crashed while trying to generate the mesh; only InVesalius 3.0b1 managed it. Possibly something to do with the lower relative density of the synovium (which surrounds the top of the joint, I believe) compared to the bone, resulting in very poor manifoldness in the STL. Either that, or the sheer complexity of the fracture site.

    The STL mesh was then cleaned up using the free MeshMixer v8. In particular, the pull (to reference geometry) tool was very efficient and useful.
    Total project time: a week of experimenting/learning, 4-6 hours of real productive work, and 3 hours printing at a slow 40 mm/s with a single wall and 15% infill.

    The 3D print shown in the images was sliced using Slic3r 0.9.5 (0.2 mm layer height) and printed on a Prusa Mendel.

    Images and files: http://bit.ly/r_frac_tibial_lat_cond (Model first printed: 2012-11-17)

    All files at above location licensed: CC BY-SA 3.0 (BY: Hamish Mead)
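For anyone who hits the same manifoldness problem, one programmatic way to check and patch an STL is the Python trimesh library. Here is a minimal sketch; the filenames are placeholders, and some of these helpers are named slightly differently across trimesh versions.

```python
# Sketch: check an STL for watertightness and attempt basic repair.
import trimesh

scan = trimesh.load("tibial_plateau.stl")   # hypothetical filename
print("watertight before repair:", scan.is_watertight)

# Basic cleanup: merge coincident vertices, drop duplicate and
# degenerate faces, fix winding/normals, and try to close small holes
# left over from the DICOM -> STL step.
scan.merge_vertices()
scan.remove_duplicate_faces()
scan.remove_degenerate_faces()
trimesh.repair.fix_normals(scan)
trimesh.repair.fill_holes(scan)

print("watertight after repair:", scan.is_watertight)
scan.export("tibial_plateau_repaired.stl")
```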



