Abstract

This paper introduces a perceptual model for determining 3D printing orientations. Additive manufacturing methods involving low-cost 3D printers often require robust branching support structures to prevent material collapse at overhangs. Although the designed shape can successfully be fabricated by adding supports, residual material remains at the contact points after the supports have been removed, resulting in unsightly surface artifacts. Moreover, fine surface details on the fabricated model can easily be damaged while removing supports. To minimize the visual impact of these artifacts, we present a method for finding printing directions that avoid placing supports in perceptually significant regions. Our model of preference in 3D printing direction is formulated as a combination of metrics, including area of support, visual saliency, preferred viewpoint, and smoothness preservation. We develop a training-and-learning methodology to obtain a closed-form solution for our perceptual model and perform a large-scale study. We demonstrate the performance of this perceptual model on both natural and man-made objects.
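
To make the formulation concrete, the Python sketch below (hypothetical, not the authors' implementation) scores uniformly sampled candidate printing directions with a weighted linear combination of per-direction metric terms. The metric values and weights here are random placeholders; in the paper the metrics are computed from the model geometry (support contact area, mesh saliency, viewpoint preference, smoothness) and the weights are learned from user-study data.

import numpy as np

def fibonacci_sphere(n=256):
    """Sample n roughly uniform candidate directions on the unit sphere."""
    i = np.arange(n)
    phi = np.arccos(1.0 - 2.0 * (i + 0.5) / n)   # polar angle
    theta = np.pi * (1.0 + 5.0**0.5) * i         # golden-angle azimuth
    return np.stack([np.sin(phi) * np.cos(theta),
                     np.sin(phi) * np.sin(theta),
                     np.cos(phi)], axis=1)

def preference_score(metrics, weights):
    """Linear combination of per-direction metric penalties.

    metrics: dict mapping metric name -> array of shape (n_dirs,),
             each normalized to [0, 1] so the weights are comparable.
    weights: dict with the same keys (in the paper, learned from data).
    """
    return sum(weights[k] * metrics[k] for k in weights)

# Placeholder per-direction metric values; a real system would compute
# these from the input mesh for each candidate direction.
dirs = fibonacci_sphere(256)
rng = np.random.default_rng(0)
metrics = {k: rng.random(len(dirs)) for k in
           ("support_area", "saliency", "viewpoint", "smoothness")}
weights = {"support_area": 0.4, "saliency": 0.3,
           "viewpoint": 0.2, "smoothness": 0.1}   # placeholder weights

scores = preference_score(metrics, weights)
best = dirs[np.argmin(scores)]   # treat the score as a cost: lower is better
print("preferred printing direction:", best)

Treating the combined score as a cost, the minimizing direction is the one that pushes supports away from perceptually significant regions.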

Acknowledgements

We thank the anonymous reviewers for their comments, and the authors of [Secord et al. 2011] for clarifications. Models provided courtesy of the AIM@SHAPE Shape Repository: Bimba, kitten, dancing children, gargoyle, Max-Planck, armadillo, cow, bunny, octopus, Egea, sheep, duck. Remaining models provided by TF3DM and Thingiverse. This material is based upon work partially supported by the HKSAR RGC General Research Fund (GRF) CUHK/14207414 and the National Science Foundation under Grant No. 1464267.

Bibtex

@article{zhang:sa:2015,
  author    = {Zhang, Xiaoting and Le, Xinyi and Panotopoulou, Athina and Whiting, Emily and Wang, Charlie C. L.},
  title     = {Perceptual Models of Preference in 3D Printing Direction},
  journal   = {ACM Trans. Graph.},
  volume    = {34},
  number    = {6},
  year      = {2015},
  pages     = {215:1--215:12},
  publisher = {ACM},
}