Charles Ruizhongtai Qi

Object viewpoint estimation from 2D images is an essential task in computer vision. However, two issues hinder its progress: scarcity of training data with viewpoint annotations, and a lack of powerful features. Inspired by the growing availability of 3D models, we propose a framework to address both issues by combining render-based image synthesis and CNNs.
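To make the synthesis step concrete, here is a minimal sketch of how rendered training data for viewpoint classification could be generated; it is not the paper's pipeline. The render_mesh stub, the 360-bin azimuth discretization, the elevation range, and the model filename are all hypothetical placeholders.

```python
import numpy as np

N_AZIMUTH_BINS = 360  # hypothetical discretization; the paper's exact binning may differ

def render_mesh(mesh_path: str, azimuth_deg: float, elevation_deg: float) -> np.ndarray:
    """Hypothetical stand-in for an off-screen renderer driven offline.
    Here it just returns a blank image so the sketch runs end to end."""
    return np.zeros((224, 224, 3), dtype=np.uint8)

def synthesize_training_pair(mesh_path: str, rng: np.random.Generator):
    """Sample a random viewpoint, render the model, return (image, azimuth class label)."""
    azimuth = rng.uniform(0.0, 360.0)
    elevation = rng.uniform(-15.0, 45.0)          # assumed elevation range
    image = render_mesh(mesh_path, azimuth, elevation)
    label = int(azimuth / 360.0 * N_AZIMUTH_BINS) % N_AZIMUTH_BINS
    return image, label

rng = np.random.default_rng(0)
img, cls = synthesize_training_pair("chair_0001.obj", rng)  # hypothetical model file
print(img.shape, cls)
```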
3D shape models are becoming widely available and easier to capture, making the available 3D information crucial for progress in object classification. Current state-of-the-art methods rely on CNNs to address this problem. Recently, two types of CNNs have emerged: CNNs based upon volumetric representations versus CNNs based upon multi-view representations.
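As an illustration of the multi-view flavor, below is a minimal PyTorch sketch of view pooling: a shared 2D trunk is applied to every rendered view and the per-view features are merged with an element-wise max. The layer sizes, the 12-view count, and the 40-class output are assumptions, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class MultiViewNet(nn.Module):
    """Toy multi-view CNN: a shared 2D trunk per view, max-pooled across views."""
    def __init__(self, n_classes: int = 40, n_views: int = 12):
        super().__init__()
        self.n_views = n_views
        self.trunk = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, views: torch.Tensor) -> torch.Tensor:
        # views: (batch, n_views, 1, H, W)
        b, v = views.shape[:2]
        feats = self.trunk(views.flatten(0, 1)).view(b, v, -1)
        pooled, _ = feats.max(dim=1)        # view pooling: element-wise max across views
        return self.classifier(pooled)

logits = MultiViewNet()(torch.randn(2, 12, 1, 64, 64))
print(logits.shape)  # (2, 40)
```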
Exogenous delivery of the neurotrophin-3 (NT-3) gene may provide a potential therapeutic strategy for ischemic stroke. To investigate the neuroprotective effects of NT-3 expression controlled by 5HRE after focal cerebral ischemia, we constructed a recombinant retrovirus vector (RV) with five copies of hypoxia-responsive elements (5HRE or 5H) and NT-3, and …
Both 3D models and 2D images contain a wealth of information about everyday objects in our environment. However, it is difficult to semantically link together these two media forms, even when they feature identical or very similar objects. We propose a joint embedding space populated by both 3D shapes and 2D images of objects, where the distances between embedded entities reflect similarity between the underlying objects.
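A common way to realize such a joint space is metric learning with a triplet objective; the sketch below is one hedged illustration, not the paper's training procedure. The feature dimensions (4096-D image features, 1024-D shape descriptors), the 64-D embedding, and the margin are arbitrary assumptions.

```python
import torch
import torch.nn as nn

# Toy encoders; input dimensions are assumptions, not the paper's actual descriptors.
image_encoder = nn.Sequential(nn.Linear(4096, 256), nn.ReLU(), nn.Linear(256, 64))
shape_encoder = nn.Sequential(nn.Linear(1024, 256), nn.ReLU(), nn.Linear(256, 64))
loss_fn = nn.TripletMarginLoss(margin=0.5)

img_feat = torch.randn(8, 4096)       # e.g. CNN features of object images
pos_shape = torch.randn(8, 1024)      # descriptor of the matching 3D shape
neg_shape = torch.randn(8, 1024)      # descriptor of a non-matching shape

anchor = image_encoder(img_feat)
positive = shape_encoder(pos_shape)
negative = shape_encoder(neg_shape)
loss = loss_fn(anchor, positive, negative)   # pull matching pairs together, push others apart
loss.backward()
print(float(loss))
```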
Pretreatment with estrogen has been shown to increase subventricular zone (SVZ) neurogenesis and improve neurological outcome after cerebral ischemia-reperfusion injury in mice. However, the potential of post-stroke estrogen administration to enhance neurogenesis is largely unknown. In this study, we explored whether post-stroke estradiol administration had …
Building discriminative representations for 3D data has been an important task in computer graphics and computer vision research. Convolutional Neural Networks (CNNs) have been shown to operate on 2D images with great success for a variety of tasks. Lifting convolution operators to 3D (3DCNNs) seems like a plausible and promising next step. Unfortunately, the computational complexity of 3D CNNs grows cubically with respect to voxel resolution.
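The cubic growth can be seen with a back-of-the-envelope count of multiply-accumulate operations for a single 3D convolution layer; the channel and kernel sizes below are illustrative assumptions.

```python
def conv3d_macs(resolution: int, c_in: int = 1, c_out: int = 32, k: int = 3) -> int:
    """Multiply-accumulate count for one stride-1 'same' 3D convolution layer."""
    return (resolution ** 3) * c_in * c_out * (k ** 3)

for res in (16, 32, 64, 128):
    print(res, f"{conv3d_macs(res):,}")
# Cost grows with the cube of the resolution: doubling it multiplies the work by 8.
```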
Our method completes a partial 3D scan using a 3D Encoder-Predictor network that leverages semantic features from a 3D classification network. The predictions are correlated with a shape database, which we use in a multi-resolution 3D shape synthesis step. We obtain completed high-resolution meshes that are inferred from partial, low-resolution input scans.
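For orientation, here is a toy 3D encoder-predictor in PyTorch: a Conv3d encoder compresses a partial occupancy grid and a ConvTranspose3d decoder predicts a completed one. It only sketches the coarse idea; the resolutions, channel counts, and the omission of the semantic-feature and shape-database stages are simplifications, not the authors' network.

```python
import torch
import torch.nn as nn

class EncoderPredictor3D(nn.Module):
    """Toy 3D encoder-predictor: compress a partial occupancy grid, predict a completed one."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, 4, stride=2, padding=1), nn.ReLU(),   # 32^3 -> 16^3
            nn.Conv3d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 16^3 -> 8^3
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),  # 8^3 -> 16^3
            nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1),              # 16^3 -> 32^3
        )

    def forward(self, partial: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.decoder(self.encoder(partial)))  # completed occupancy in [0, 1]

completed = EncoderPredictor3D()(torch.rand(1, 1, 32, 32, 32))
print(completed.shape)  # (1, 1, 32, 32, 32)
```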
1. Details on Model Training: Training for Our Volumetric CNNs. To produce occupancy grids from meshes, the faces of a mesh are subdivided until the length of the longest edge is within a single voxel; then all voxels that intersect with a face are marked as occupied. For 3D resolutions 10, 30 and 60 we generate voxelizations with central regions 10, 24, 54 and …
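The subdivision-based voxelization described above can be sketched as follows; this is an approximate reimplementation, not the authors' code. It assumes vertices are already scaled to voxel coordinates in [0, resolution), and it marks the voxels under the corners of fully subdivided triangles as a stand-in for exact face-voxel intersection.

```python
import numpy as np

def voxelize_faces(vertices: np.ndarray, faces: np.ndarray, resolution: int) -> np.ndarray:
    """Occupancy grid for a mesh, assuming vertices are already scaled to [0, resolution)."""
    grid = np.zeros((resolution,) * 3, dtype=bool)
    voxel_size = 1.0

    def mark(tri: np.ndarray) -> None:
        edges = np.linalg.norm(tri - np.roll(tri, 1, axis=0), axis=1)
        if edges.max() > voxel_size:
            # Subdivide into 4 triangles via edge midpoints and recurse.
            a, b, c = tri
            ab, bc, ca = (a + b) / 2, (b + c) / 2, (c + a) / 2
            for sub in ((a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)):
                mark(np.array(sub))
        else:
            # Triangle fits in a voxel; mark the voxels under its corners
            # (an approximation of "all voxels intersecting the face").
            idx = np.clip(np.floor(tri).astype(int), 0, resolution - 1)
            grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True

    for face in faces:
        mark(vertices[face])
    return grid

# Single triangle spanning a corner of a 10^3 grid.
verts = np.array([[0.1, 0.1, 0.1], [8.0, 0.2, 0.1], [0.2, 8.0, 0.1]])
occ = voxelize_faces(verts, np.array([[0, 1, 2]]), resolution=10)
print(occ.sum(), "occupied voxels")
```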
A point cloud is an important type of geometric data structure. Due to its irregular format, most researchers transform such data to regular 3D voxel grids or collections of images. This, however, renders data unnecessarily voluminous and causes issues. In this paper, we design a novel type of neural network that directly consumes point clouds and well respects the permutation invariance of points in the input.
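The key to consuming unordered points directly is a symmetric aggregation function; the toy PyTorch model below illustrates the idea with a shared per-point MLP followed by a max pool, using made-up layer sizes rather than the actual PointNet architecture. The final line checks that permuting the input points leaves the output unchanged.

```python
import torch
import torch.nn as nn

class TinyPointNet(nn.Module):
    """Toy permutation-invariant classifier: shared per-point MLP, then a symmetric max pool."""
    def __init__(self, n_classes: int = 40):
        super().__init__()
        self.point_mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 256), nn.ReLU())
        self.head = nn.Linear(256, n_classes)

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (batch, n_points, 3); the same MLP is applied to every point independently.
        per_point = self.point_mlp(points)
        global_feat, _ = per_point.max(dim=1)   # max over points is order-independent
        return self.head(global_feat)

net = TinyPointNet()
pts = torch.randn(2, 1024, 3)
perm = torch.randperm(1024)
print(torch.allclose(net(pts), net(pts[:, perm, :]), atol=1e-5))  # True: order does not matter
```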