GlobalMapper: Arbitrary-Shaped Urban Layout Generation
ICCV 2023

Abstract

Modeling and designing urban building layouts is of significant interest in computer vision, computer graphics, and urban applications. A building layout consists of a set of buildings in city blocks defined by a network of roads. We observe that building layouts are discrete structures, consisting of multiple rows of buildings of various shapes, and are amenable to skeletonization for mapping arbitrary city block shapes to a canonical form. Hence, we propose a fully automatic approach to building layout generation using graph attention networks. Our method generates realistic urban layouts given arbitrary road networks and enables conditional generation based on learned priors. Our results, including a user study, demonstrate performance superior to prior layout generation networks, and show that our method supports arbitrary city block shapes and varying building shapes, as illustrated by generated layouts for 28 large cities.
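
To make the skeletonization idea concrete, here is a minimal sketch (not the paper's released code; the block_skeleton helper, the 256-pixel resolution, and the example polygon are illustrative assumptions) that rasterizes a city-block polygon and extracts its medial-axis skeleton with shapely and scikit-image. The paper's actual canonical mapping may differ in its details.

    import numpy as np
    from shapely.geometry import Polygon
    from skimage.draw import polygon as raster_polygon
    from skimage.morphology import medial_axis

    def block_skeleton(block: Polygon, res: int = 256) -> np.ndarray:
        """Rasterize a city-block polygon and return its medial-axis
        skeleton, whose points trace the block's 'spine'."""
        minx, miny, maxx, maxy = block.bounds
        xs, ys = block.exterior.xy
        # Map polygon vertices into pixel coordinates of a res x res mask.
        cols = (np.asarray(xs) - minx) / (maxx - minx) * (res - 1)
        rows = (np.asarray(ys) - miny) / (maxy - miny) * (res - 1)
        mask = np.zeros((res, res), dtype=bool)
        rr, cc = raster_polygon(rows, cols, shape=mask.shape)
        mask[rr, cc] = True
        return medial_axis(mask)  # boolean skeleton image

    # e.g. an arbitrary, non-rectangular (L-shaped) block:
    block = Polygon([(0, 0), (4, 0), (4, 1), (1, 1), (1, 3), (0, 3)])
    skeleton = block_skeleton(block)

Rows of buildings can then be parameterized along such a skeleton, which is one way an arbitrary block shape can be brought into a canonical frame.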

Method

We use a graph attention network (GAT) as the backbone of our encoder and decoder. Graph attention networks perform weighted, multi-layer message passing between connected nodes. In particular, the edges of our 2D grid graph topology enable message passing between buildings that are next to and in back of one another; a sketch of this grid-graph message passing follows.
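
The sketch below (a minimal illustration, not the authors' released code; the GridGAT and grid_edges names, grid size, and feature widths are assumptions) shows attention-weighted message passing over such a building grid graph using PyTorch Geometric's GATConv.

    import torch
    from torch_geometric.nn import GATConv

    def grid_edges(rows: int, cols: int) -> torch.Tensor:
        """4-connected grid: 'next-to' edges within a row of buildings,
        'in-back-of' edges between adjacent rows (both directions)."""
        edges = []
        for r in range(rows):
            for c in range(cols):
                i = r * cols + c
                if c + 1 < cols:                      # next-to neighbor
                    edges += [(i, i + 1), (i + 1, i)]
                if r + 1 < rows:                      # in-back-of neighbor
                    edges += [(i, i + cols), (i + cols, i)]
        return torch.tensor(edges, dtype=torch.long).t().contiguous()

    class GridGAT(torch.nn.Module):
        """Two GAT layers: each building slot aggregates its neighbors'
        features with learned attention weights."""
        def __init__(self, in_dim=64, hidden=128, out_dim=64, heads=4):
            super().__init__()
            self.gat1 = GATConv(in_dim, hidden, heads=heads)
            self.gat2 = GATConv(hidden * heads, out_dim, heads=1)

        def forward(self, x, edge_index):
            x = torch.relu(self.gat1(x, edge_index))
            return self.gat2(x, edge_index)

    rows, cols = 3, 10                       # 3 building rows, 10 slots each
    x = torch.randn(rows * cols, 64)         # per-slot feature vectors
    model = GridGAT()
    z = model(x, grid_edges(rows, cols))     # (30, 64) node embeddings

Stacking attention layers lets information flow beyond immediate neighbors, so each building embedding reflects its wider context within the block.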



Comparisons to SOTA


Generation with <5% of Input Block Priors (Red)


Weather Research & Forecasting Model (WRF) Simulation Results


Large-Scale Generation in Vancouver (Green Contour Marks Real Data)


BibTeX


@inproceedings{he2023globalmapper,
  title={GlobalMapper: Arbitrary-Shaped Urban Layout Generation},
  author={He, Liu and Aliaga, Daniel},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={454--464},
  year={2023}
}

Acknowledgements

We thank Haoteng Yin and Zhiquan Wang for their valuable suggestions, and Harsh Kamath for providing the building height dataset.

The website template was borrowed from Michaël Gharbi.