
Global-to-Local Generative Model for 3D Shapes

lib:08e77721e808808d (v1.0.0)

Authors: Hao Wang, Nadav Schor, Ruizhen Hu, Haibin Huang, Daniel Cohen-Or, Hui Huang
Where published: ACM Transactions on Graphics (Proc. SIGGRAPH Asia) 2019
Document: PDF | DOI
Artifact development version: GitHub
Abstract URL: https://dl.acm.org/citation.cfm?id=3275025


We introduce a generative model for 3D man-made shapes. The presented method takes a global-to-local (G2L) approach. An adversarial network (GAN) is built first to construct the overall structure of the shape, segmented and labeled into parts. A novel conditional auto-encoder (AE) is then augmented to act as a part-level refiner. The GAN, associated with additional local discriminators and quality losses, synthesizes a voxel-based model and assigns part labels to the voxels, represented in separate channels. The AE is trained to amend the initial synthesis of the parts, yielding more plausible part geometries. We also introduce new means to measure and evaluate the performance of an adversarial generative model. We demonstrate that our global-to-local generative model produces significantly better results than a plain three-dimensional GAN, in terms of both shape variety and the distribution with respect to the training data.
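
The abstract describes a two-stage pipeline: a global 3D GAN that outputs a voxel grid with one channel per part label, followed by a conditional auto-encoder that refines each part. The sketch below illustrates only that high-level structure in PyTorch. The class names (G2LGenerator, PartRefinerAE), the voxel resolution, latent size, part count, and layer choices are illustrative assumptions; they do not reproduce the authors' implementation, which is available via the GitHub artifact link above.

# Minimal sketch of the global-to-local idea from the abstract.
# All sizes and names are assumptions for illustration only.
import torch
import torch.nn as nn

NUM_PARTS = 4      # assumed number of semantic part labels
VOXEL_RES = 32     # assumed output voxel resolution
LATENT_DIM = 128   # assumed size of the global latent code


class G2LGenerator(nn.Module):
    """Global stage: a 3D GAN generator producing one channel per part label."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            # 1x1x1 -> 4x4x4
            nn.ConvTranspose3d(LATENT_DIM, 256, kernel_size=4, stride=1, padding=0),
            nn.BatchNorm3d(256), nn.ReLU(inplace=True),
            # 4 -> 8
            nn.ConvTranspose3d(256, 128, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm3d(128), nn.ReLU(inplace=True),
            # 8 -> 16
            nn.ConvTranspose3d(128, 64, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm3d(64), nn.ReLU(inplace=True),
            # 16 -> 32; one occupancy-like channel per part, values in [0, 1]
            nn.ConvTranspose3d(64, NUM_PARTS, kernel_size=4, stride=2, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, z):
        # z: (batch, LATENT_DIM) -> (batch, NUM_PARTS, 32, 32, 32)
        return self.net(z.view(z.size(0), LATENT_DIM, 1, 1, 1))


class PartRefinerAE(nn.Module):
    """Local stage: a conditional auto-encoder that refines one coarse part
    channel, conditioned on a one-hot part-label vector."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),   # 32 -> 16
            nn.Conv3d(32, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),  # 16 -> 8
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(64 + NUM_PARTS, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),  # 8 -> 16
            nn.ConvTranspose3d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),                        # 16 -> 32
        )

    def forward(self, coarse_part, part_label):
        # coarse_part: (batch, 1, 32, 32, 32); part_label: (batch, NUM_PARTS) one-hot
        h = self.encoder(coarse_part)
        # Broadcast the label over the spatial grid and concatenate as extra channels.
        label_vol = part_label.view(-1, NUM_PARTS, 1, 1, 1).expand(-1, -1, *h.shape[2:])
        return self.decoder(torch.cat([h, label_vol], dim=1))


# Usage: sample a coarse labeled shape globally, then refine each part locally.
z = torch.randn(2, LATENT_DIM)
coarse = G2LGenerator()(z)                      # (2, NUM_PARTS, 32, 32, 32)
refiner = PartRefinerAE()
refined_parts = []
for p in range(NUM_PARTS):
    label = torch.zeros(2, NUM_PARTS)
    label[:, p] = 1.0
    refined_parts.append(refiner(coarse[:, p:p + 1], label))
refined = torch.cat(refined_parts, dim=1)       # refined labeled voxel shape
print(refined.shape)                            # torch.Size([2, 4, 32, 32, 32])

Training details from the paper (the local discriminators, the quality losses, and the new evaluation measures) are not represented in this sketch.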

