Seth Horowitz is a neuroscientist and assistant research professor in the Department of Ecology and Evolutionary Biology at Brown University as well as a maker and a 3D printing enthusiast. He shares this report on some ways that he has been using his 3D printer, including a new research method.
Three years ago I had an interesting problem – I needed a device for an experiment that could hold a live bat comfortably, but in such a way that it couldn’t bite or move its head around. In the past, I’d worked with engineers who would make very complex Plexiglas, cage-like devices that worked well, but you had to have several to fit different sizes (and species) of bats. Each one could take weeks to make and cost over a thousand dollars.
About this time, 3D printer kits were beginning to get attention on the web, and I decided to see whether I could use one of these things to custom print live bat holders. I got a small pilot grant from the NASA Rhode Island Space Grant (the research was relevant to NASA’s interests – bats are beloved subjects for the moving-about-in-the-dark crowd) and purchased a MakerBot Cupcake.
After several months of construction, assembly, disassembly, swearing, and reconfiguring, I had my 3D printed bat holder, which used about 50 cents’ worth of plastic and took all of two hours to print. But how many bat holders do you actually need? In trying to figure out what else I could do with my Cupcake, I realized that 3D printing is a new form of data actualization – taking the simplified coded representation of an object and creating that object – a mechanical analog of going from genes to proteins. And with the wealth of 3D data around, the possibilities are almost endless.
For at least the last decade, 3D models and their images have been common in science and engineering – CT scans create three-dimensional images of skeletons and dense tissues, and MRI does the same for soft tissues. Digital terrain modeling combines multiple images taken from different orbital perspectives to reconstruct planetary and lunar surfaces for 3D flyovers. But all of these have inherent limitations. Individual elements of the images have to go through substantial filtering to give a clean view of regions of interest, which of course means you filter out interesting things while looking for others. Overlapping elements blur finer structures, giving you nice overviews of the outside of your object but lacking internal detail that is not always recoverable just by changing your point of view. And a major limitation is that these are still images: no matter how pretty or detailed, they constrain a complex object into strictly visual information. But when you take these 3D visual representations and convert them back into physical objects, you not only reopen possibilities for examining them visually, you also gain detail from our exquisitely fine sense of form through touch.
I found one application by examining data from an old study I did. Much of my work has focused on auditory development, using bullfrogs as a model. Bullfrogs are interesting models for human hearing for two reasons: first, their hearing is very similar to low-frequency (<2500 Hz) hearing in humans, and second, their brains are in some ways more resilient and flexible than ours. For example, frogs can actually regenerate their central nervous system after damage, something we wish humans could do to prevent things like noise-induced hearing loss. But they pay a price for this plasticity – they are also much more prone to damage from environmental toxins and conditions.
In 2004, during a frog recording session, a member of the lab spotted and caught an odd adult male bullfrog. It had only one ear. It seemed otherwise healthy, but since frogs are very dependent on hearing for social behavior, this frog was going to have trouble breeding and defending its territory. We took a CT scan of it to see if we could determine the extent of its malformation. CT scans are X-rays taken in a continuous spiral down an area of interest, which lets you build a 3D model of the bone and dense tissues. The CT scan of the frog (Figure 1) showed that while its inner ear seemed normal on both sides, it was lacking the eardrum and the small piece of cartilage called the stapes (or stapedium) that normally connects the external tympanum to the inner ear.
It was not until we found a second frog with the same malformation that we began to realize something was going on here. These two frogs showed no signs of injury, so it was more likely that something had happened during development. The CT images led us to believe that, since the inner ears looked normal, this might be similar to a human condition called aural atresia, which can cause malformation of the outer and middle ears but leave the inner ears intact. But now, years later, I decided to examine the images again, this time with the aid of my 3D printer. I took the raw CT files and, using the open-source program ImageJ, exported one section of the skull as a printable stereolithography (STL) file, then printed a physical model enlarged about 25 times (Figure 2).
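ImageJ handles this conversion interactively, but the same idea can be scripted. Here is a minimal sketch in Python, assuming the pydicom, scikit-image, and numpy-stl packages; the folder name, bone threshold, and scale factor below are placeholders for illustration, not values from the actual frog scan.

```python
# Sketch of the CT-stack-to-STL idea in Python, outside of ImageJ.
# Assumes a folder of DICOM slices; folder name, threshold, and scale are placeholders.
from pathlib import Path

import numpy as np
import pydicom
from skimage import measure
from stl import mesh

# Stack the CT slices into one 3D volume (assuming file-name order matches slice order).
slices = [pydicom.dcmread(p) for p in sorted(Path("frog_ct").glob("*.dcm"))]
volume = np.stack([s.pixel_array for s in slices]).astype(float)

# Extract an isosurface at a density threshold that separates bone from soft tissue.
verts, faces, _, _ = measure.marching_cubes(volume, level=300.0)

# Enlarge ~25x so the printed section is big enough to handle.
verts *= 25.0

# Write the surface out as STL for the printer's slicing software.
surface = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
for k, face in enumerate(faces):
    surface.vectors[k] = verts[face]
surface.save("frog_skull_25x.stl")
```

In practice the threshold has to be tuned per scan, and the raw surface usually needs some mesh cleanup before printing.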
As soon as I had the model in hand and was able to turn it and handle it, I noticed that there were in fact asymmetries in the regions where the auditory (8th) nerve leaves the inner ear to connect to the brain, suggesting that this malformation was not similar to aural atresia. Rather, it was likely due to exposure to insecticides that change into teratogens in the presence of UV light and can cause more extensive abnormalities at certain points in development. The 3D printed model ended up giving greater insight into what caused the abnormality than the original images viewed on the computer. Creating a physical, printable model lets you use the tools you evolved to use together – your hands and eyes – to extend your findings beyond what even expensive hardware and software can show.
Another interest of mine is space education and outreach, and I wanted to apply 3D printing there as well. Exploration of worlds (including the Earth) is one of the most exciting human adventures of the 20th and 21st centuries, and yet the excitement comes almost exclusively from imagery. Mass and salinity globes of the Earth, 3D fly-throughs of canyons on Mars and glacial cracks on Jupiter’s moon Europa, high-definition views of lunar craters – with few exceptions, all of these and more are available only visually. Physical models, such as custom-made limited editions of asteroid shapes, cost thousands of dollars. Textured globes and maps that let someone feel mountain ridges and landmass shapes have been around for over a century, originally developed for the blind, but they exist only for common teaching tools such as Earth globes. So how can you bring space and earth science education to the 37 million people in the world who are totally blind, not to mention the 124 million who are nearly so? And beyond that, how much more would sighted people get out of being able to physically handle a model of an asteroid?
In 2010, I started seeking out 3D data on the shapes of asteroids to see if it would be possible to print models of space bodies and terrains. I found that there was a wealth of asteroid shape models derived from radar data (in large part by Professor Scott Hudson of the School of Electrical Engineering at Washington State University), as well as Martian digital terrain data from the University of Arizona’s HiRISE group, some of which was already being used in space simulation programs like Celestia. I began taking these NASA-based data sets and (after significant work) converting them into stereolithography format, printing physical models of asteroids, the Martian moons Phobos and Deimos, and even planetary features such as the Martian crater Gusev (Figure 3).
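The "significant work" is mostly format wrangling: the published shape models are essentially lists of vertices and triangular facets, which just need to be rewritten as STL. The sketch below assumes a simple OBJ-like text layout with "v x y z" vertex lines and "f i j k" facet lines (1-based indices) and a made-up file name; real shape-model formats vary, so the parsing would need to be adapted.

```python
# Sketch: convert a radar-derived asteroid shape model (assumed "v"/"f" text format)
# into an STL file that slicing software can print. File name and scale are placeholders.
import numpy as np
from stl import mesh

vertices, facets = [], []
with open("asteroid_shape_model.txt") as f:
    for line in f:
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":
            vertices.append([float(v) for v in parts[1:4]])
        elif parts[0] == "f":
            # facet indices are 1-based in this assumed format
            facets.append([int(i) - 1 for i in parts[1:4]])

vertices = np.array(vertices)

# Shape models are often in kilometers; scale to something hand-sized.
vertices *= 10.0

body = mesh.Mesh(np.zeros(len(facets), dtype=mesh.Mesh.dtype))
for k, face in enumerate(facets):
    body.vectors[k] = vertices[face]
body.save("asteroid_model.stl")
```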
But to show how the pace of online software feeds into new ideas in education and making, I was able to scoop NASA in making a model of the asteroid Vesta. Vesta is the second most massive asteroid in the main belt and is very different from most other asteroids and space bodies. I particularly wanted a model of Vesta to compare with “potato-shaped” asteroids such as Eros, because holding the two side by side gives an immediate visceral (or at least haptic) grasp of how gravity-induced differentiation changes a body’s shape, from rubble pile to almost-planet.
Although Vesta is currently being orbited by the Dawn probe, which is sending back thousands of beautiful images, NASA had not yet released the “official” 3D shape model. But I found two ways around this. First, by feeding the images that showed the rotation of Vesta into a free online 3D modeling program (www.my3dscanner.com), I was able to get a basic point cloud – a shape built by correlating similar light and dark points across successive images. Using that point cloud for some of the details, I combined it with the released “global map” of Vesta, mapping it onto a flattened ovoid derived from the shape seen in some of the orbital images. This let me create a somewhat low-res but accurate 3D model even before the official release (Figure 4).
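For readers curious what “mapping a global map onto a flattened ovoid” can look like in practice, here is a rough sketch of the idea, assuming a grayscale map image, made-up radii and relief exaggeration, and the same Python packages as above. It illustrates the technique rather than reproducing my exact pipeline (the photo-derived point cloud is not shown, and the longitude seam is left open, so the result needs cleanup before printing).

```python
# Rough sketch: drape a grayscale "global map" over a flattened ovoid (ellipsoid)
# and export the result as an STL. Map file, relief factor, and radii are placeholders.
import numpy as np
from PIL import Image
from stl import mesh

# Treat map brightness as relative elevation.
height = np.asarray(Image.open("vesta_global_map.png").convert("L"), dtype=float)
rows, cols = height.shape
relief = 1.0 + 0.05 * (height / 255.0)      # up to 5% radial exaggeration

# Flattened ovoid: equatorial radius a, polar radius c (rough, assumed values in km).
a, c = 280.0, 225.0
lat = np.linspace(-np.pi / 2, np.pi / 2, rows)
lon = np.linspace(-np.pi, np.pi, cols)
lon_g, lat_g = np.meshgrid(lon, lat)

x = a * relief * np.cos(lat_g) * np.cos(lon_g)
y = a * relief * np.cos(lat_g) * np.sin(lon_g)
z = c * relief * np.sin(lat_g)
verts = np.column_stack([x.ravel(), y.ravel(), z.ravel()])

# Triangulate the lat/lon grid (two triangles per cell; the seam is left open).
def idx(r, col):
    return r * cols + col

faces = []
for i in range(rows - 1):
    for j in range(cols - 1):
        faces.append([idx(i, j), idx(i + 1, j), idx(i + 1, j + 1)])
        faces.append([idx(i, j), idx(i + 1, j + 1), idx(i, j + 1)])

body = mesh.Mesh(np.zeros(len(faces), dtype=mesh.Mesh.dtype))
for k, face in enumerate(faces):
    body.vectors[k] = verts[face]
body.save("vesta_lowres.stl")
```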
This story isn’t about being able to scoop NASA – it’s about demonstrating that the wealth of tools and free data out there can empower the interested. Going from images to 3D model to printed object lets you create your own scale models of the universe. You can build a curriculum that lets the blind feel the Mid-Atlantic Ridge, or tell the difference between a stark, sharp lunar crater and a weather-eroded Martian one. And at a professional level, you can create accurate printed models of terrains to test roving or sample-gathering vehicles and help us continue our exploration, all while reaching a wider audience and motivating new generations of students, sighted and not, to realize they can hold models of the universe in their own hands.