Philipp Grohs

In recent work, nonlinear subdivision schemes which operate on manifold-valued data have been successfully analyzed with the aid of so-called proximity conditions bounding the difference between a linear scheme and the nonlinear one. The main difficulty with this method is the verification of these conditions. In the present paper we obtain a very clear…
Geometric wavelet-like transforms for univariate and multivariate manifold-valued data can be constructed by means of nonlinear stationary subdivision rules which are intrinsic to the geometry under consideration. We show that in an appropriate vector bundle setting for a general class of interpolatory wavelet transforms, which applies to Riemannian…
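As a reference point for the nonlinear construction described above, the following is a minimal sketch of a *linear* (Euclidean) interpolatory wavelet transform of the Donoho type: even-indexed samples are kept as the coarse approximation, odd-indexed samples are predicted by linear interpolation from their neighbours, and the wavelet details are the prediction errors. The function names and the specific (two-point) prediction stencil are illustrative choices, not taken from the paper.

```python
import numpy as np

def forward(data):
    """One level of a linear interpolatory wavelet transform:
    coarse = even-indexed samples; details = odd-indexed samples
    minus the midpoint prediction from their even neighbours."""
    coarse = data[::2]
    prediction = 0.5 * (coarse[:-1] + coarse[1:])
    details = data[1::2] - prediction
    return coarse, details

def inverse(coarse, details):
    """Exact inverse: re-insert the even samples and add the
    prediction back onto the stored details."""
    out = np.empty(len(coarse) + len(details))
    out[::2] = coarse
    out[1::2] = details + 0.5 * (coarse[:-1] + coarse[1:])
    return out
```

For smooth input the details are small, which is what makes the transform useful for compression; the manifold-valued versions replace the linear prediction by a rule intrinsic to the geometry.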
Subdivision is a powerful way of approximating a continuous object f(x, y) by a sequence ((S^l p_{i,j})_{i,j∈ℤ})_{l∈ℕ} of discrete data on finer and finer grids. The rule S, which maps an approximation on a coarse grid, S^l p, to the approximation on the next finer grid, S^{l+1} p, is called a subdivision scheme. If for a given scheme S every continuous object f(x, y)…
We prove optimal bounds for the discretization error of geodesic finite elements for variational partial differential equations for functions that map into a nonlinear space. For this we first generalize the well-known Céa lemma to nonlinear function spaces. In a second step we prove optimal interpolation error estimates for pointwise interpolation by…
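For orientation, the classical Céa lemma in the linear setting, which the abstract says is generalized to nonlinear function spaces, can be stated as follows (V a Hilbert space, a(·,·) a continuous and coercive bilinear form, V_h ⊂ V a finite element subspace, u_h the Galerkin solution):

```latex
% Classical (linear) Céa lemma: the Galerkin solution u_h in V_h
% is quasi-optimal among all approximations of u from V_h,
\[
  \|u - u_h\|_V \;\le\; \frac{M}{\alpha}\,
  \inf_{v_h \in V_h} \|u - v_h\|_V ,
\]
% where M is the continuity constant and \alpha the coercivity
% constant of the bilinear form a(\cdot,\cdot).
```

Combined with an interpolation error estimate for V_h, this reduces the discretization error analysis to an approximation-theoretic question, which is the strategy the abstract describes in the nonlinear setting.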
Linear stationary subdivision rules take a sequence of input data and produce ever denser sequences of subdivided data from it. They are employed in multiresolution modeling and have intimate connections with wavelet and more general pyramid transforms. Data which naturally do not live in a vector space, but in a nonlinear geometry like a surface, symmetric…
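A standard concrete instance of such a linear stationary rule (chosen here for illustration; the abstract does not single out a particular scheme) is Chaikin's corner-cutting scheme, which doubles the density of a control polygon at each step:

```python
import numpy as np

def chaikin(points):
    """One round of Chaikin's corner-cutting subdivision rule:
    each edge (p, q) contributes the two new points
    3/4 p + 1/4 q and 1/4 p + 3/4 q.  Iterating converges to a
    C^1 quadratic B-spline curve."""
    p, q = points[:-1], points[1:]
    refined = np.empty((2 * len(p), points.shape[1]))
    refined[0::2] = 0.75 * p + 0.25 * q
    refined[1::2] = 0.25 * p + 0.75 * q
    return refined
```

Applying the rule repeatedly produces the ever denser sequences mentioned above; since every new point is a fixed affine combination of old ones, the rule is linear and stationary.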
We study the following modification of a linear subdivision scheme S: Let M be a surface embedded in Euclidean space, and P a smooth projection mapping onto M. Then the P-projection analogue of S is defined as T := P ∘ S. As it turns out, the smoothness of the scheme T is always at least as high as the smoothness of the underlying scheme S or the…
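The construction T := P ∘ S is easy to sketch for a concrete case. Here, as an illustrative assumption not taken from the paper, M is the unit sphere, P is the closest-point projection x ↦ x/|x|, and S is a simple interpolatory midpoint scheme:

```python
import numpy as np

def linear_midpoint(points):
    """Interpolatory linear scheme S: keep the old points and
    insert the midpoint of every edge."""
    out = np.empty((2 * len(points) - 1, points.shape[1]))
    out[0::2] = points
    out[1::2] = 0.5 * (points[:-1] + points[1:])
    return out

def sphere_projection(points):
    """Closest-point projection P onto the unit sphere."""
    return points / np.linalg.norm(points, axis=1, keepdims=True)

def projection_analogue(points):
    """T = P o S: subdivide linearly, then project back onto M."""
    return sphere_projection(linear_midpoint(points))
```

Because P is applied after every linear step, all subdivided data stay on the surface M, while the scheme remains as easy to evaluate as its linear counterpart.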
Wiatowski and Bölcskei (2015) proved that deformation stability and vertical translation invariance of deep convolutional neural network-based feature extractors are guaranteed by the network structure per se rather than by the specific convolution kernels and non-linearities. While the translation invariance result applies to square-integrable…
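The kind of feature extractor referred to above alternates convolutions with a modulus non-linearity and aggregates the propagated signals. The following is a toy 1-D sketch of that scattering-type cascade, not the authors' exact architecture; the filters are arbitrary, which is consistent with the claim that the guarantees come from the structure rather than the kernels:

```python
import numpy as np

def scattering_features(signal, filters, depth=2):
    """Toy scattering-type feature extractor: repeatedly
    convolve with fixed filters, apply the modulus
    non-linearity, and average each propagated signal to obtain
    (approximately) translation-invariant features."""
    layers = [[signal]]
    feats = [np.mean(np.abs(signal))]
    for _ in range(depth):
        nxt = []
        for s in layers[-1]:
            for h in filters:
                u = np.abs(np.convolve(s, h, mode="same"))
                nxt.append(u)
                feats.append(u.mean())
        layers.append(nxt)
    return np.array(feats)
```

With F filters and depth d the cascade produces 1 + F + F^2 + … + F^d features, one averaged value per path through the network.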