As part of this project, I wrote a small slicer that renders a mesh into a series of bitmap images (with some help from our general counsel, Martin).
The slicer runs in a browser and is completely client-side. It accepts an STL file and downloads a zip file full of .png images. Try it out above!
Turning a model into a set of slices is surprisingly tricky, especially when the model has multiple nested bodies.
Why is voxelization hard? Voxelization is entirely about deciding whether a given voxel is inside or outside the model. Unfortunately, meshes have no notion of "insideness" or "outsideness"; they're just a bag of triangles floating in space.
We have to enforce a notion of insideness on this herd of unruly triangles, or the situation will be completely hopeless.
To start, we declare that triangles are "front-facing" or "back-facing" based on the order of their vertices:
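In concrete terms, the vertex order fixes the triangle's normal via the cross product (right-hand rule). A minimal sketch, with hypothetical helper names:

```javascript
// Compute a triangle's normal from its vertex order (right-hand rule):
// with counter-clockwise winding as seen from outside, the normal
// points outward. Vertices are [x, y, z] arrays.
function triangleNormal(v0, v1, v2) {
  const e1 = [v1[0] - v0[0], v1[1] - v0[1], v1[2] - v0[2]];
  const e2 = [v2[0] - v0[0], v2[1] - v0[1], v2[2] - v0[2]];
  return [
    e1[1] * e2[2] - e1[2] * e2[1],
    e1[2] * e2[0] - e1[0] * e2[2],
    e1[0] * e2[1] - e1[1] * e2[0],
  ];
}
```

Swapping any two vertices reverses the winding and flips the normal, turning a front-facing triangle into a back-facing one.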
Why does this matter? Consider the following idealized cross-sections of spheres-within-spheres:
Each pair of spheres has faces at the same locations in space.
However, the first pair of spheres is solid all the way through (and is doubly solid in the center, where both spheres are present); the second pair of spheres has an internal empty cavity.
The only difference between these models is the winding direction of the internal sphere's triangles. The normals for these models look something like this:
With front and back faces established, we use a raycasting algorithm on every pixel in a slice (similar to the strategy used for point-in-polygon testing):
- Walk from outside the model to the target slice depth
- If you encounter a front-facing triangle, increase a counter by one
- If you encounter a back-facing triangle, decrease the counter by one
- When you reach the target slice depth, you are inside the model if your counter is greater than zero
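The counting logic above can be sketched in a few lines. This is an illustrative sketch, not the slicer's actual code; `crossings` is a hypothetical list of `{z, front}` records describing where the ray pierces a triangle and whether that triangle is front-facing:

```javascript
// Walk a ray cast along +z from outside the model and decide whether
// the point at targetZ is inside. Each crossing records the depth at
// which the ray pierces a triangle and that triangle's facing.
function isInside(crossings, targetZ) {
  let counter = 0;
  for (const c of crossings) {
    if (c.z >= targetZ) continue;  // only crossings before the slice depth
    counter += c.front ? 1 : -1;   // front face: +1, back face: -1
  }
  return counter > 0;
}
```

Running this on the sphere pairs gives the expected answers: two nested front faces leave the counter at 2 (doubly solid), while a front face followed by a back face cancels out to 0 (empty cavity).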
Testing this algorithm on our pairs of spheres produces the correct results: A voxel in the center of the left pair is filled, and a voxel in the center of the right pair is empty.
The main downside of this approach is that incoming meshes need to be clean and water-tight. However, this is a fundamental challenge and not unique to the raycasting algorithm: without a water-tight boundary, the notions of insideness and outsideness begin to lose meaning.
This raycasting scheme maps neatly onto the GPU's stencil buffer. The model is positioned so that the target slice is at the very edge of OpenGL's clipping box, then rendered with three passes (with depth testing turned off):
- In the first pass, the stencil buffer increments on front-facing fragments
- In the second pass, the stencil buffer decrements on back-facing fragments
- In the third pass, we discard any fragments where the stencil buffer is zero
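The three passes might be set up with WebGL state along these lines. This is a hedged sketch, not the slicer's actual code: it assumes a WebGL context `gl`, and `drawModel()` / `drawFullScreenQuad()` are placeholder draw calls.

```javascript
// Assumes an existing WebGL context `gl`; drawModel() and
// drawFullScreenQuad() are hypothetical placeholders.
gl.disable(gl.DEPTH_TEST);
gl.enable(gl.STENCIL_TEST);
gl.enable(gl.CULL_FACE);
gl.clear(gl.STENCIL_BUFFER_BIT);

// Pass 1: front faces increment the stencil buffer (color writes off).
gl.colorMask(false, false, false, false);
gl.stencilFunc(gl.ALWAYS, 0, 0xff);
gl.cullFace(gl.BACK);                          // keep only front faces
gl.stencilOp(gl.KEEP, gl.KEEP, gl.INCR_WRAP);
drawModel();

// Pass 2: back faces decrement it.
gl.cullFace(gl.FRONT);                         // keep only back faces
gl.stencilOp(gl.KEEP, gl.KEEP, gl.DECR_WRAP);
drawModel();

// Pass 3: draw the slice, discarding pixels where the counter is zero.
gl.colorMask(true, true, true, true);
gl.disable(gl.CULL_FACE);
gl.stencilFunc(gl.NOTEQUAL, 0, 0xff);
gl.stencilOp(gl.KEEP, gl.KEEP, gl.KEEP);
drawFullScreenQuad();
```

The stencil buffer plays the role of the per-pixel counter: it ends up nonzero exactly where the ray-crossing count is nonzero at the slice depth.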
This executes the raycasting algorithm described above in parallel, with all of the hard work offloaded to the GPU.
In a CPU implementation, the tricky part would be figuring out which triangles intersect various rays. Here, we get all of that for free!
If you're feeling silly enough to actually try using this on your DLP printer, customize it by editing values in printer.js.