This supplementary material provides details on the features used in segmentation, structure class prediction, and support prediction.

Segmentation features:
- Boundary (8 dims):
  - B1. Strength: average Pb value (1 dim)
  - B2. Length: perimeter of each region; (boundary length) / (smaller perimeter) (3 dims)
  - B3. Smoothness: length / (L1 endpoint distance) (1 dim)
  - B4. …
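To make the geometry concrete, the three boundary features listed above might be computed as in the sketch below. The function and argument names are hypothetical; only the formulas (average Pb, length ratios, length over L1 endpoint distance) come from the feature descriptions.

```python
import numpy as np

def boundary_features(pb_along_boundary, boundary_pts, perim_a, perim_b):
    """Sketch of the boundary features described above (names hypothetical).

    pb_along_boundary : Pb (probability-of-boundary) values sampled along
                        the shared boundary between two regions
    boundary_pts      : (N, 2) array of ordered boundary pixel coordinates
    perim_a, perim_b  : perimeters of the two adjacent regions
    """
    length = len(boundary_pts)  # boundary length in pixels

    # B1. Strength: average Pb value along the boundary
    strength = float(np.mean(pb_along_boundary))

    # B2. Length: (boundary length) / (smaller region perimeter)
    rel_length = length / min(perim_a, perim_b)

    # B3. Smoothness: length / (L1 distance between the two endpoints);
    # a straight boundary gives a value near 1, a wiggly one much larger
    l1 = np.abs(boundary_pts[0] - boundary_pts[-1]).sum()
    smoothness = length / max(l1, 1)

    return strength, rel_length, smoothness
```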
In this paper we explore how a structured light depth sensor, in the form of the Microsoft Kinect, can assist with indoor scene segmentation. We use a CRF-based model to evaluate a range of different representations for depth information and propose a novel prior on 3D location. We introduce a new and challenging indoor scene dataset, complete with accurate …
Training conditional maximum entropy models on massive data sets requires significant computational resources. We examine three common distributed training methods for conditional maxent: a distributed gradient computation method, a majority vote method, and a mixture weight method. We analyze and compare the CPU and network time complexity of each of these …
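Of the three methods named above, the mixture weight method is the simplest to sketch: each worker trains a model independently on its own data shard, and the final model is the average of the per-shard parameter vectors. The sketch below uses binary logistic regression (a two-class conditional maxent model) trained by plain batch gradient descent; the function names and hyperparameters are illustrative, not taken from the paper.

```python
import numpy as np

def train_logreg(X, y, epochs=200, lr=0.1):
    """Batch gradient descent for binary logistic regression,
    i.e. a two-class conditional maxent model."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)   # average log-loss gradient
    return w

def mixture_weight(shards):
    """Mixture weight method: train independently on each shard,
    then average the resulting parameter vectors."""
    return np.mean([train_logreg(X, y) for X, y in shards], axis=0)
```

Only the final averaging step needs network communication, which is why the method's network cost is so low compared with exchanging gradients every iteration.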
We present a system which can recognize the contents of your meal from a single image, and then predict its nutritional contents, such as calories. The simplest version assumes that the user is eating at a restaurant for which we know the menu. In this case, we can collect images offline to train a multi-label classifier. At run time, we apply the …
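As a toy illustration of the restaurant setting described above, the final step might look like the sketch below: a multi-label classifier assigns each menu item a score, and calories are summed over the items whose score clears a threshold. The menu, calorie counts, scores, and threshold here are all invented for illustration, not taken from the paper.

```python
# Hypothetical menu with per-item calorie counts.
MENU_CALORIES = {"burger": 550, "fries": 380, "salad": 150, "soda": 140}

def estimate_calories(scores, threshold=0.5):
    """scores: dict mapping menu item -> multi-label classifier probability.

    Returns the detected items and their total calories.
    """
    detected = [item for item, p in scores.items() if p >= threshold]
    return detected, sum(MENU_CALORIES[item] for item in detected)
```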
Diabetic retinopathy, an eye disorder caused by diabetes, is the primary cause of blindness in America and over 99% of cases in India. India and China currently account for over 90 million diabetic patients and are on the verge of an explosion of diabetic populations. This may result in an unprecedented number of persons becoming blind unless diabetic …
The availability of commodity depth sensors such as Kinect has enabled development of methods which can densely reconstruct arbitrary scenes. While the results of these methods are accurate and visually appealing, they are quite often incomplete. This is either due to the fact that only part of the space was visible during the data capture process or due to …
The cost of large-scale data collection and annotation often makes the application of machine learning algorithms to new tasks or datasets prohibitively expensive. One approach to circumventing this cost is training models on synthetic data, where annotations are provided automatically. Despite their appeal, such models often fail to generalize from synthetic …
The Internet topology has witnessed significant changes over the years with the rise and fall of several Internet Service Providers (ISPs). In this paper, we propose a new economic model that can aid in understanding the evolution of the Internet topology and provide insight into why certain ISPs fail and others succeed. Our economic model is motivated by …
Collecting well-annotated image datasets to train modern machine learning algorithms is prohibitively expensive for many tasks. One appealing alternative is rendering synthetic data, where ground-truth annotations are generated automatically. Unfortunately, models trained purely on rendered images often fail to generalize to real images. To address this …