This model of Make: executive editor Mike Senese’s skull, broken nose and all, was printed by Hawk Ridge Systems using a 3D Systems ProJet 660.

 

For more on 3D printing, check out Make: Volume 42.
Don’t have this issue? Get it in the Maker Shed.

3D printing is all around us, opening possibilities for us to do in our garages what traditionally could only be done by large organizations. It’s now possible to 3D-print a model of your own bones, innards, and other anatomical structures, starting from the 3D image of a CT scan and using only open source software tools. I’ll show you how to do it using a couple of common desktop 3D printers; if you don’t have access to a 3D printer, check out makezine.com/where-to-get-digital-fabrication-tool-access to find a machine or service near you.

The Data Is Yours

So you broke your arm, got sent to the emergency room, and while you were there the doctor ordered a CT scan for good measure. While you wait, it occurs to you that it would be interesting to see a 3D-printed model of your broken bone, so you kindly ask the nurse for a copy of the CT scan.

In the United States, you have the right to this data, and the health care provider is required to give it to you within 30 days. They can charge you a reasonable fee for copying and mailing.

U.S. laws give patients the right to access their own personal records. More specifically:

The Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule gives you, with few exceptions, the right to inspect, review, and receive a copy of your medical records and billing records that are held by health plans and health care providers covered by the Privacy Rule. If you want a copy, you may need to pay for copies and mailing.

Individual states may add rules of their own; New York, for example, has its own regulations on patient access to medical records.

In the case of a CT scan, you want to ask for the digital data, not film printouts. This will typically be delivered to you on a DVD containing files encoded in the DICOM format.
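Before installing a full viewer, you can peek at what’s on the disc with a few lines of Python. This is just a quick sketch assuming the pydicom package is installed; the filename below is a placeholder for any slice file on your DVD.

```python
# Quick look at a DICOM slice's metadata (sketch; assumes the pydicom package is installed)
import pydicom

ds = pydicom.dcmread("IM-0001-0001.dcm")   # placeholder filename for one slice from the DVD
print(ds.PatientName, ds.Modality, ds.StudyDate)
print("Slice size:", ds.Rows, "x", ds.Columns)
print("Pixel spacing (mm):", ds.PixelSpacing)
```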

Image Processing Software

Probably the most commonly used open source applications for processing medical images are OsiriX and 3D Slicer (not to be confused with Slic3r, the G-code generator for 3D printers).

Both are built on the open source toolkits ITK and VTK. However, OsiriX is only available on the Mac, while Slicer runs on Windows, Mac, and Linux, which is why we’ll focus on Slicer for the rest of this article.

Let’s start by getting Slicer from its download page.

Project Steps

Read the DICOM images

The images of a 3D scan are typically saved as individual slices, using one file per slice. To enable you to reproduce the demonstration in this article, we use a publicly available DICOM dataset from the OsiriX sample images page: the PELVIX dataset, which contains a fractured pelvis and part of the adjacent femur bones.

Download the images from the OsiriX page and extract the content of the zip file into a directory.

To load the images, launch Slicer and go to the File menu in the top bar, then select DICOM. This opens a new dialog window exposing the options for loading DICOM images. In this new window, click the Import button in the top menu.

Read the DICOM images (cont’d)

Then select the folder where your images are stored and click the Import button on the lower right corner of the dialog. A progress bar will indicate the loading of the images, and a new window will appear, showing the organized content of the input data files.

Finally, click the Load Selection into Slicer button to load the full set of images into memory. The dataset will be displayed in the 4-quadrant window. Three of the quadrants show the X, Y, and Z slice cuts respectively, while the fourth one shows a 3D view of the dataset.
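If you’d rather script this step than click through the GUI, the same slice files can be assembled into a single 3D volume with SimpleITK, the Python wrapping of ITK. This is a sketch of an alternative route, assuming SimpleITK is installed and the extracted PELVIX slices sit in one directory; it isn’t part of Slicer’s own workflow.

```python
# Assemble the DICOM slices into one 3D volume (sketch; assumes SimpleITK is installed)
import SimpleITK as sitk

reader = sitk.ImageSeriesReader()
files = reader.GetGDCMSeriesFileNames("PELVIX")   # directory holding the extracted slices
reader.SetFileNames(files)
volume = reader.Execute()

print("Volume size (voxels):", volume.GetSize())
print("Voxel spacing (mm):", volume.GetSpacing())
```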

Segment the bones

The process of extracting an anatomical structure from a 3D volumetric image is called “image segmentation”. The 3D Slicer application provides a set of segmentation tools based on the Insight Toolkit (ITK), an open source library for image analysis sponsored by the U.S. National Library of Medicine.

Probably the easiest segmentation tool to use in 3D Slicer is the “region growing” segmentation tool. You select “seed points” on the image, and from those seed points the tool connects pixels that have intensity values very similar to the points you chose.

Click on the Modules drop-down menu, and select the Segmentation group. You want the Simple Region Growing Segmentation tool.

Under the Segmentation Parameters, select Seeds–>Create new MarkupsFiducial. Now select your seed points. (What does MarkupsFiducial mean? The seed points you’re selecting as input to the region growing method are provided to Slicer through the “Fiducial” markers interface. Don’t worry about it.)

Segment the bones (cont'd)

To create a new Volume for saving the results of the segmentation, select IO–>Output Volume–>Create new volume.

Finally, execute the segmentation module by clicking on the Apply button.

The resulting segmented structure will appear as a label map, in a separate color. You can see it by selecting the 3D View Controllers menu.
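For reference, roughly the same seed-based region growing can be sketched with SimpleITK’s ConfidenceConnected filter, which also takes seed points and a multiplier. The seed coordinates and parameter values below are made up for illustration, and the equivalence to Slicer’s module is an assumption, not its exact internals.

```python
# Seed-based region growing on the CT volume (sketch; SimpleITK, not Slicer's own module)
import SimpleITK as sitk

# 'volume' is the 3D CT image loaded in the earlier sketch; the seed index is hypothetical
seeds = [(256, 255, 40)]
bone = sitk.ConfidenceConnected(volume,
                                seedList=seeds,
                                numberOfIterations=2,
                                multiplier=2.5,            # same idea as the Multiplier parameter
                                initialNeighborhoodRadius=1,
                                replaceValue=1)
sitk.WriteImage(bone, "bone-label.nii.gz")   # label map; can be loaded back into Slicer
```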

Generate a surface mesh

Up to this point, the segmentation is still a volumetric image, made of 3D pixels (or voxels). We now need to extract the surface around it, in the form of a mesh composed of points and the triangles connecting them.

Go back up to the Modules list and select the Model Maker module. Now point to the label volume from which you want to create the surface.

Fun fact: The visualization and surface-extraction functionality in 3D Slicer is provided by the Visualization Toolkit (VTK), an open source C++ library for 3D computer graphics, image processing, and visualization.
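To give a rough idea of what happens under the hood, here’s a minimal VTK sketch that runs marching cubes over a label map and produces a triangle mesh. The reader class and filename are assumptions for illustration; the Model Maker module layers its own smoothing and other options on top of this basic step.

```python
# Extract a surface mesh from a label map with marching cubes (sketch; VTK Python)
import vtk

reader = vtk.vtkNIFTIImageReader()
reader.SetFileName("bone-label.nii.gz")      # label map from the segmentation sketch

surface = vtk.vtkDiscreteMarchingCubes()     # marching cubes variant suited to label maps
surface.SetInputConnection(reader.GetOutputPort())
surface.SetValue(0, 1)                       # extract the boundary of label value 1
surface.Update()

print("Triangles:", surface.GetOutput().GetNumberOfCells())
```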

Save the surface in STL format

You can now save the resulting mesh surface into an STL file, which is what you’ll use as input for 3D printing. To save the surface, select File–>Save (or use the Ctrl+S shortcut). This prompts a detailed menu with all the pieces of data that can be saved at this point. You’re only concerned with saving the surface, so first unselect all the rows, then select only the row with the filename FemurBone.

In the second column, click on the dropdown menu to select the STL file format. Finally, click the Save button on the lower-right corner of this window.

NOTE: The third column indicates the directory where the file will be saved. If you wish, you can change that directory by clicking on it, before saving the file.
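The same save can be scripted too. A minimal sketch with VTK’s STL writer, continuing from the marching cubes sketch above:

```python
# Write the extracted surface to an STL file (sketch; VTK Python)
import vtk

writer = vtk.vtkSTLWriter()
writer.SetInputConnection(surface.GetOutputPort())   # 'surface' from the marching cubes sketch
writer.SetFileTypeToBinary()                         # binary STL is far smaller than ASCII
writer.SetFileName("FemurBone.stl")
writer.Write()
```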

Inspect your mesh

As a way of inspecting the quality of the resulting surface mesh, load the same STL file into Meshlab (also open source) to make sure that the mesh is properly constructed.
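If you prefer a scripted sanity check instead of (or alongside) Meshlab, VTK can report the surface area, the enclosed volume, and whether the mesh has open or non-manifold edges that could trip up a slicer. A sketch, assuming the STL saved above:

```python
# Quick mesh sanity check (sketch; VTK Python)
import vtk

reader = vtk.vtkSTLReader()
reader.SetFileName("FemurBone.stl")

# Surface area and enclosed volume of the mesh
props = vtk.vtkMassProperties()
props.SetInputConnection(reader.GetOutputPort())
props.Update()
print("Area (mm^2):", props.GetSurfaceArea(), " Volume (mm^3):", props.GetVolume())

# Open (boundary) or non-manifold edges suggest a mesh that may not slice cleanly
edges = vtk.vtkFeatureEdges()
edges.SetInputConnection(reader.GetOutputPort())
edges.BoundaryEdgesOn()
edges.NonManifoldEdgesOn()
edges.FeatureEdgesOff()
edges.ManifoldEdgesOff()
edges.Update()
print("Problem edges:", edges.GetOutput().GetNumberOfCells())
```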

Refining

Image segmentation is a mixture of art, science, and a bit of black magic (read “parameter tuning”). Our segmentation process was not perfect, since we obviously didn’t get the full femur bone, nor the hip bone.

To correct this, go back to the Region Growing module and increase the Multiplier parameter. This number tells the software to accept into the new region any pixel whose intensity value is within x standard deviations of the intensity values of your seed points, where x is the multiplier. The larger the multiplier, the bigger the extracted region will be, since it will include pixels that are less similar to the seed points.
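In other words, if the voxels around your seeds have mean intensity m and standard deviation s, a voxel with intensity I is accepted when it falls between m - x*s and m + x*s. A tiny sketch with made-up CT numbers shows how the accepted range widens:

```python
# Illustrative numbers only: how the multiplier widens the accepted intensity range
seed_mean, seed_std = 1000.0, 80.0            # hypothetical intensities around the seed points
for multiplier in (2.0, 3.0, 3.5):
    low = seed_mean - multiplier * seed_std
    high = seed_mean + multiplier * seed_std
    print(f"multiplier {multiplier}: accept voxels between {low:.0f} and {high:.0f}")
```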

By using a Multiplier value of 3.0 we get the second image here. Much better.

Here’s why: Bones are magnificent mechanical structures. They continuously remodel themselves in order to provide support in the specific locations where mechanical stress is being applied. This is elegantly done by cells that are sensitive to mechanical pressure and that are also involved in producing the calcium crystals that give bone its rigidity. Curiously, we can think of bone as a biological structure that is continuously 3D-printing itself from the inside. Because of this adaptability, bone structures have different levels of calcification in different locations, and correspondingly they appear brighter or darker in the CT images.

When we increase our multiplier factor, what we are doing is accepting regions of the bone that are less and less calcified.

With a 3.5 multiplier value, we get the entire femur attached to the pelvis.

You can get a better view of the model by using the 3D Only view mode, as seen in the third image here.

This time we save that file as FemurBoneAndHip.stl and we import it into Meshlab for verification.

Note that the hip here is fractured; that is, after all, the reason why we got to the emergency room in the first place. The rupture is not an artifact of the segmentation, but the actual state of the bone in the dataset.

Post-processing

Now that you have the STL model of the structure, we’ll clean it up in ParaView, an open-source, multi-platform data analysis and visualization application that’s also based on VTK.

Load the STL file by selecting File–>Open, and then clicking on the Apply button. ParaView won’t apply operations until you click Apply. This is intended to prevent long waits when dealing with very large datasets, and reflects the fact that ParaView is very commonly used on computer clusters and supercomputers, typically to inspect the output of large-scale simulations.

When extracting surfaces from 3D images, the resulting meshes tend to have a very large number of triangles. It’s usually a good idea to decimate the surface to replace many of these triangles with larger ones, while still preserving the general shape of the object.

In ParaView, select the Decimate filter and set your Target Reduction to 0.5. The software will attempt to reduce the number of triangles by 50%.
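ParaView is built on VTK, so a roughly equivalent reduction can be scripted as well. A sketch, assuming the FemurBoneAndHip.stl file saved earlier:

```python
# Reduce the triangle count by about 50% while keeping the overall shape (sketch; VTK Python)
import vtk

reader = vtk.vtkSTLReader()
reader.SetFileName("FemurBoneAndHip.stl")

decimate = vtk.vtkDecimatePro()
decimate.SetInputConnection(reader.GetOutputPort())
decimate.SetTargetReduction(0.5)     # try to remove 50% of the triangles
decimate.PreserveTopologyOn()        # avoid opening holes while simplifying
decimate.Update()
```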

Now that you have a less dense mesh, it’s time to rotate your 3D model to put it in a more convenient position on the 3D printer bed. Typically, it’s useful to lay the model as flat as possible on the bed surface. This cannot be done perfectly with a bone structure, but we can still make the job of the 3D printer a bit easier by rotating the structure.

To perform this transformation, select the Transform filter in ParaView, and put the bone “on its side” to make it lie better on the printer bed. You also need to apply a scale factor to produce a smaller model for the printer. Remember that your 3D model comes from a real CT scan dataset, so it has the typical size of a human pelvis, which is a bit too large for the build volume of the 3D printer, and certainly too big for a first test print.

Finally, save the new, decimated, rotated, and scaled surface into a new file, by selecting File–>Save Data, providing a filename, and selecting the STL file format.
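For completeness, here’s what the rotate, scale, and save steps look like in a VTK sketch. The 90-degree rotation and the 0.5 scale factor are example values, not the exact numbers used for the print shown here:

```python
# Rotate the model onto its side, shrink it for a test print, and save it (sketch; VTK Python)
import vtk

transform = vtk.vtkTransform()
transform.RotateX(90)                 # lay the model on its side (example angle)
transform.Scale(0.5, 0.5, 0.5)        # half-size test print (example factor)

transformed = vtk.vtkTransformPolyDataFilter()
transformed.SetTransform(transform)
transformed.SetInputConnection(decimate.GetOutputPort())   # 'decimate' from the previous sketch
transformed.Update()

writer = vtk.vtkSTLWriter()
writer.SetInputConnection(transformed.GetOutputPort())
writer.SetFileName("FemurBoneAndHip-print.stl")
writer.Write()
```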

Printing — with Printrbot Simple

You can now take this STL file, load it into RepetierHost, slice it with Slic3r to generate G-code, and try printing it.

At this point, we are fully in 3D printing territory, where settings such as supports, “rafts,” and infill will be critical for properly printing the model. To print the bones on our Printrbot Simple, we enabled both supports and a raft, in order to provide a base to hold the pieces of the fractured pelvis along with the top part of the femur bone.

After about an hour of printing, you can see the result: the model in PLA plastic with 0.3mm layers (second image here). After removing the supports, we get our bone model (third image).

Printing — with MakerBot Replicator 2X

We loaded the same STL file into MakerWare to print it with the MakerBot Replicator 2X, again enabling supports and a raft.

Here’s the resulting model, printed in natural-color ABS. Note that one of the supports broke off during the printing process due to the raft lifting a bit from the printer bed.

After we remove the supports and the raft, we get the final model (third image).

Share your data

And since you’ve made it this far, why not share your data? DICOM images can be publicly shared on the MIDAS platform, along with the segmentations and the STL files containing the mesh surfaces.

Here’s the shared folder with the STL files resulting from our image segmentation and post-processing.

I hope you enjoy replicating this process with your favorite medical datasets and your favorite 3D printer!