Streetscape augmentation using generative adversarial networks: insights related to health and wellbeing

@article{Wijnands2019StreetscapeAU,
  title={Streetscape augmentation using generative adversarial networks: insights related to health and wellbeing},
  author={Jasper S. Wijnands and Kerry A. Nice and Jason Thompson and Haifeng Zhao and Mark R. Stevenson},
  journal={arXiv preprint arXiv:1905.06464},
  year={2019}
}
Deep learning using neural networks has provided advances in image style transfer, merging the content of one image (e.g., a photo) with the style of another (e.g., a painting). Our research shows this concept can be extended to analyse the design of streetscapes in relation to health and wellbeing outcomes. An Australian population health survey (n=34,000) was used to identify the spatial distribution of health and wellbeing outcomes, including general health and social capital. For each…
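The adversarial training that underpins this kind of streetscape augmentation can be sketched framework-free. The snippet below is a minimal illustration, not the authors' code; the function names are invented, and only the standard binary cross-entropy losses of generator and discriminator are shown.

```python
import math


def bce(prediction: float, target: float) -> float:
    """Binary cross-entropy for a single probability prediction."""
    eps = 1e-12  # clamp to avoid log(0)
    p = min(max(prediction, eps), 1 - eps)
    return -(target * math.log(p) + (1 - target) * math.log(1 - p))


def discriminator_loss(d_real: float, d_fake: float) -> float:
    """The discriminator is trained to score real images 1 and generated ones 0."""
    return bce(d_real, 1.0) + bce(d_fake, 0.0)


def generator_loss(d_fake: float) -> float:
    """The generator is trained to make the discriminator score its output as real
    (the non-saturating formulation commonly used in practice)."""
    return bce(d_fake, 1.0)
```

A confident discriminator on a poor fake (d_fake near 0) yields a large generator loss, which is what drives the generator toward realistic output.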
Citations

Using machine learning to examine associations between the built environment and physical function: A feasibility study.
This study examined the feasibility of using generative adversarial networks (machine learning) to measure neighbourhood design from 'street view' and aerial imagery, exploring the relationship between the built environment and physical function; aerial imagery failed to produce meaningful results.
Modeling and interpreting road geometry from a driver's perspective using variational autoencoders
This research advances the understanding of road design by considering the driver’s perception, proposing a new methodology based on variational autoencoders (VAEs) to derive low-dimensional and exploitable parameters of the perspective road geometry.
Classifying Street Spaces with Street View Images for a Spatial Indicator of Urban Functions
A rule-based clustering method is devised to support an empirically generated classification of street spaces, based on features extracted from street view images by a deep-learning computer-vision model, and its validity is demonstrated.
The “Paris-End” of Town? Deriving Urban Typologies Using Three Imagery Types
This work uses neural networks to analyse millions of images of urban form (street view, satellite imagery, and street maps) to find shared characteristics among the 1692 largest cities in the world, and shows specific disadvantages of each imagery type for constructing urban typologies.
Urban neighbourhood environment assessment based on street view image processing: A review of research trends
  • Nan He, Guanghao Li
  • Geography
  • 2021
The urban neighbourhood is among the most important spaces for public activity and behaviour in cities, and the quantification of neighbourhood environments is receiving increasing attention.
Street view imagery in urban analytics and GIS: A review
Street view imagery has rapidly ascended as an important data source for geospatial data collection and urban analytics, deriving insights and supporting informed decisions.

References

Showing 1-10 of 101 references
High-Resolution Deep Convolutional Generative Adversarial Networks
HDCGAN, a new layered network, is proposed; it incorporates current state-of-the-art techniques for the convergence of DCGANs (deep convolutional generative adversarial networks) and achieves convincing high-resolution results.
Image-to-Image Translation with Conditional Adversarial Networks
Conditional adversarial networks are investigated as a general-purpose solution to image-to-image translation problems; the approach is shown to be effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks.
Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network
  • C. Ledig, Lucas Theis, +6 authors W. Shi
  • Computer Science, Mathematics
  • 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2017
SRGAN, a generative adversarial network (GAN) for image super-resolution (SR), is presented: to the authors' knowledge, the first framework capable of inferring photo-realistic natural images for 4x upscaling factors, together with a perceptual loss function consisting of an adversarial loss and a content loss.
Semantic Image Inpainting with Deep Generative Models
A novel method for semantic image inpainting is presented, which generates missing content by conditioning on the available data; it successfully predicts information in large missing regions and achieves pixel-level photorealism, significantly outperforming state-of-the-art methods.
Few-Shot Unsupervised Image-to-Image Translation
  • Ming-Yu Liu, Xun Huang, +4 authors J. Kautz
  • Computer Science, Mathematics
  • 2019 IEEE/CVF International Conference on Computer Vision (ICCV)
  • 2019
The model achieves its few-shot generation capability by coupling an adversarial training scheme with a novel network design; the effectiveness of the framework is verified through extensive experimental validation and comparisons to several baseline methods on benchmark datasets.
Context Encoders: Feature Learning by Inpainting
A context encoder is found to learn a representation that captures not just appearance but also the semantics of visual structures, and can be used for semantic inpainting tasks, either stand-alone or as initialization for non-parametric methods.
Deep Learning the City: Quantifying Urban Perception at a Global Scale
A new crowdsourced dataset is introduced, containing 110,988 images from 56 cities and 1,170,000 pairwise comparisons along six perceptual attributes provided by 81,630 online volunteers, showing that crowdsourcing combined with neural networks can produce urban perception data at a global scale.
Generative Adversarial Nets
We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
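The two-player game described in that abstract is written in the original paper as the minimax objective over a value function V(D, G):

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```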
Toward Multimodal Image-to-Image Translation
This work aims to model a distribution of possible outputs in a conditional generative modeling setting, helping prevent a many-to-one mapping from the latent code to the output during training, also known as the problem of mode collapse.
Unsupervised Image to Image Translation
Unsupervised image-to-image translation methods have received considerable attention in recent years, with multiple techniques emerging to tackle the challenge from different perspectives.