Gravitationally Lensed Black Hole Emission Tomography

@article{Levis2022GravitationallyLB,
  title={Gravitationally Lensed Black Hole Emission Tomography},
  author={Aviad Levis and Pratul P. Srinivasan and Andrew A. Chael and Ren Ng and Katherine L. Bouman},
  journal={ArXiv},
  year={2022},
  volume={abs/2204.03715}
}
Measurements from the Event Horizon Telescope enabled the visualization of light emission around a black hole for the first time. So far, these measurements have been used to recover a 2D image under the assumption that the emission field is static over the period of acquisition. In this work, we propose BH-NeRF, a novel tomography approach that leverages gravitational lensing to recover the continuous 3D emission field near a black hole. Compared to other 3D reconstruction or tomography…
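To make the reconstruction setting concrete, here is a minimal JAX sketch of lensed emission tomography with a neural field. It assumes curved null geodesics have already been traced through the black hole spacetime by a general-relativistic ray tracer; the network sizes, array layouts, and the pixel-space loss are illustrative assumptions, not the paper's actual pipeline (BH-NeRF fits sparse interferometric measurements rather than pixels).

import jax
import jax.numpy as jnp

def init_mlp(key, widths=(3, 64, 64, 1)):
    # Small coordinate MLP: 3D position -> scalar emission (illustrative sizes).
    params = []
    for w_in, w_out in zip(widths[:-1], widths[1:]):
        key, sub = jax.random.split(key)
        params.append((jax.random.normal(sub, (w_in, w_out)) / jnp.sqrt(w_in),
                       jnp.zeros(w_out)))
    return params

def emission(params, x):
    h = x
    for W, b in params[:-1]:
        h = jax.nn.relu(h @ W + b)
    W, b = params[-1]
    return jax.nn.softplus(h @ W + b)[..., 0]   # nonnegative emission

def render(params, geodesics, deltas):
    # geodesics: (n_rays, n_steps, 3) sample points along precomputed *curved*
    # null geodesics (the lensing enters here, not via straight camera rays);
    # deltas: (n_rays, n_steps) affine-parameter step sizes.
    return jnp.sum(emission(params, geodesics) * deltas, axis=-1)

def loss(params, geodesics, deltas, target):
    return jnp.mean((render(params, geodesics, deltas) - target) ** 2)

grad_fn = jax.jit(jax.grad(loss))   # plug into any gradient-based optimizer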


Strong Lensing Source Reconstruction Using Continuous Neural Fields
From the nature of dark matter to the rate of expansion of our Universe, observations of distant galaxies distorted through strong gravitational lensing have the potential to answer some of the major…

References

Showing 1-10 of 42 references
Evaluation of New Submillimeter VLBI Sites for the Event Horizon Telescope
The Event Horizon Telescope (EHT) is a very long-baseline interferometer built to image supermassive black holes on event-horizon scales. In this paper, we investigate candidate sites for an expanded…
NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis
This work describes how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrates results that outperform prior work on neural rendering and view synthesis.
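For reference, the renderer this line of work builds on reduces to the standard emission-absorption quadrature along each camera ray; a self-contained sketch in the same JAX style as above, with sigma and rgb standing in for a NeRF MLP's predicted densities and colors:

import jax.numpy as jnp

def composite(sigma, rgb, deltas):
    # sigma: (n,) densities, rgb: (n, 3) colors, deltas: (n,) step sizes.
    alpha = 1.0 - jnp.exp(-sigma * deltas)                 # per-sample opacity
    trans = jnp.concatenate([jnp.ones(1),
                             jnp.cumprod(1.0 - alpha + 1e-10)[:-1]])  # T_i
    weights = alpha * trans                                # compositing weights
    return jnp.sum(weights[:, None] * rgb, axis=0)         # expected ray color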
DeepGEM: Generalized Expectation-Maximization for Blind Inversion
Typically, inversion algorithms assume that a forward model, which relates a source to its resulting measurements, is known and fixed. Using collected indirect measurements and the forward model, the…
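As a toy illustration of the blind-inversion structure described above: alternating point-estimate updates of the source and of a hypothetical Gaussian-blur forward model's unknown width. This is only a degenerate form of generalized EM (the paper maintains a distribution over sources rather than a point estimate):

import jax
import jax.numpy as jnp

def forward(theta, x):
    # Hypothetical forward model: blur x with a Gaussian kernel of width theta.
    t = jnp.arange(-5.0, 6.0)
    k = jnp.exp(-0.5 * (t / theta) ** 2)
    return jnp.convolve(x, k / k.sum(), mode='same')

def blind_invert(y, steps=500, lr=0.1):
    x, theta = jnp.zeros_like(y), jnp.array(2.0)   # initial guesses
    loss = lambda x, th: jnp.mean((forward(th, x) - y) ** 2)
    for _ in range(steps):
        x = x - lr * jax.grad(loss, 0)(x, theta)          # source update (E-like)
        theta = theta - lr * jax.grad(loss, 1)(x, theta)  # model update (M-like)
    return x, theta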
Inference of Black Hole Fluid-Dynamics from Sparse Interferometric Measurements
An approach is presented to recover the underlying properties of fluid-dynamical processes from sparse measurements by estimating the coefficients of a space-time diffusion equation that dictates the stationary statistics of the dynamical process.
In-the-Wild Single Camera 3D Reconstruction Through Moving Water Surfaces
A novel differentiable framework is proposed, the first single-camera solution capable of simultaneously retrieving the structure of dynamic water surfaces and static underwater scene geometry in the wild; it integrates ray casting with Snell's law at the refractive interface, multi-view triangulation, and specially designed loss functions.
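The refractive-interface step reduces to Snell's law in vector form; a standalone sketch (d is the unit incident direction, n the unit normal pointing against d, eta = n1/n2 the index ratio; the names are illustrative):

import jax.numpy as jnp

def refract(d, n, eta):
    # Vector Snell's law: bend ray d at an interface with unit normal n.
    cos_i = -jnp.dot(d, n)                      # cosine of incidence angle
    sin2_t = eta ** 2 * (1.0 - cos_i ** 2)      # Snell: sin(t) = eta * sin(i)
    cos_t = jnp.sqrt(jnp.maximum(1.0 - sin2_t, 0.0))
    refracted = eta * d + (eta * cos_i - cos_t) * n
    reflected = d + 2.0 * cos_i * n             # total internal reflection case
    return jnp.where(sin2_t > 1.0, reflected, refracted)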
Animatable Neural Radiance Fields for Modeling Dynamic Human Bodies
This paper addresses the challenge of reconstructing an animatable human model from a multi-view video by introducing neural blend weight fields to produce the deformation fields and shows that this approach significantly outperforms recent human synthesis methods.
CryoDRGN: Reconstruction of heterogeneous cryo-EM structures using neural networks
CryoDRGN, an algorithm that leverages the representation power of deep neural networks to directly reconstruct continuous distributions of 3D density maps and map per-particle heterogeneity of single-particle cryo-EM datasets, is presented.
Non-Rigid Neural Radiance Fields: Reconstruction and Novel View Synthesis of a Dynamic Scene From Monocular Video
Non-Rigid Neural Radiance Fields (NR-NeRF), a reconstruction and novel view synthesis approach for general non-rigid dynamic scenes, takes RGB images of a dynamic scene as input, and creates a high-quality space-time geometry and appearance representation.
Dynamic Neural Radiance Fields for Monocular 4D Facial Avatar Reconstruction
This work combines a scene representation network with a low-dimensional morphable model which provides explicit control over pose and expressions and shows that this learned volumetric representation allows for photorealistic image generation that surpasses the quality of state-of-the-art video-based reenactment methods.
Neural Scene Flow Fields for Space-Time View Synthesis of Dynamic Scenes
A method to perform novel view and time synthesis of dynamic scenes, requiring only a monocular video with known camera poses as input, is presented, and a new representation that models the dynamic scene as a time-variant continuous function of appearance, geometry, and 3D scene motion is introduced.