Corpus ID: 29159537

Learning Hierarchical Visual Representations in Deep Neural Networks Using Hierarchical Linguistic Labels

@article{Peterson2018LearningHV,
  title={Learning Hierarchical Visual Representations in Deep Neural Networks Using Hierarchical Linguistic Labels},
  author={Joshua C. Peterson and Paul Soulos and Aida Nematzadeh and Thomas L. Griffiths},
  journal={ArXiv},
  year={2018},
  volume={abs/1805.07647}
}
Modern convolutional neural networks (CNNs) are able to achieve human-level object classification accuracy on specific tasks, and currently outperform competing models in explaining complex human visual representations. However, the categorization problem is posed differently for these networks than for humans: the accuracy of these networks is evaluated by their ability to identify single labels assigned to each image. These labels often cut arbitrarily across natural psychological taxonomies…
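The abstract describes supervising a CNN with labels drawn from multiple levels of a linguistic taxonomy rather than a single label per image. As a hedged illustration only (not the paper's actual architecture or training setup), one simple way to combine supervision at a fine (subordinate) and a coarse (superordinate) level is a weighted sum of per-level cross-entropy losses; the linear heads, class counts, and 0.5 weighting below are illustrative assumptions:

```python
import numpy as np

# Minimal sketch: a feature vector is classified at two levels of a
# taxonomy (fine, e.g. "terrier", and coarse, e.g. "dog"), and the
# training loss blends the two cross-entropies.

def softmax(logits):
    z = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(logits, labels):
    probs = softmax(logits)
    return -np.log(probs[np.arange(len(labels)), labels]).mean()

def hierarchical_loss(feats, W_fine, W_coarse, y_fine, y_coarse, alpha=0.5):
    # Weighted sum of per-level classification losses; alpha trades off
    # subordinate-level against superordinate-level supervision.
    return (alpha * cross_entropy(feats @ W_fine, y_fine)
            + (1 - alpha) * cross_entropy(feats @ W_coarse, y_coarse))

rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 16))     # batch of 4 feature vectors
W_fine = rng.standard_normal((16, 10))   # linear head over 10 fine classes
W_coarse = rng.standard_normal((16, 3))  # linear head over 3 coarse classes
y_fine = np.array([0, 3, 7, 9])
y_coarse = np.array([0, 1, 2, 1])
loss = hierarchical_loss(feats, W_fine, W_coarse, y_fine, y_coarse)
print(round(float(loss), 3))
```

Setting alpha to 1 recovers ordinary single-label (fine-only) training, which is the baseline the abstract contrasts against.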
Citations

From convolutional neural networks to models of higher-level cognition (and back again).
This paper asks whether convolutional neural networks might also provide a basis for modeling higher-level cognition, focusing on the core phenomena of similarity and categorization, and develops a toolkit for integrating cognitively motivated constraints back into CNN training paradigms.

Leveraging deep neural networks to study human cognition
A methodology by which psychological research can confidently leverage representations learned by deep neural networks to model and predict complex human behavior, potentially extending the scope of the field.

Learning deep taxonomic priors for concept learning from few positive examples
This paper formulates a training paradigm that allows a meta-learning algorithm to solve the problem of concept learning from few positive examples, and discovers a taxonomic prior that is useful for learning novel concepts even from held-out supercategories and mimics human generalization behavior.

NBDT: Neural-Backed Decision Trees
This work proposes Neural-Backed Decision Trees (NBDTs), modified hierarchical classifiers that use trees constructed in weight space, which achieve interpretability together with neural network accuracy and match state-of-the-art neural networks on CIFAR10, CIFAR100, TinyImageNet, and ImageNet.

Reverse Engineering Creativity into Interpretable Neural Networks
This paper presents a novel deep learning framework for creativity, called INNGenuity, which aims to resolve various flaws of current AI learning architectures that stem from the opacity of their models.

Learning Visually Grounded Representations with Sketches
We test whether visually grounded meaning representations for words can be improved by grounding to sketches rather than to natural images. Intuitively, sketches encode higher-level abstract…

Interpretable Few-Shot Image Classification with Neural-Backed Decision Trees

References

(Showing 10 of 25 references)
Adapting Deep Network Features to Capture Psychological Representations
A method for adapting deep features to align with human similarity judgments, resulting in image representations that can potentially be used to extend the scope of psychological experiments.

Basic Level Categorization Facilitates Visual Object Recognition
This work proposes a network optimization strategy inspired both by the developmental trajectory of children's visual object recognition capabilities and by Bar (2003), who hypothesized that basic-level information is carried in the fast magnocellular pathway through the prefrontal cortex (PFC) and then projected back to inferior temporal cortex (IT), where subordinate-level categorization is achieved.

Cognitive Psychology for Deep Neural Networks: A Shape Bias Case Study
This work proposes to address the interpretability problem in modern DNNs using the rich history of problem descriptions, theories, and experimental methods developed by cognitive psychologists to study the human mind, and demonstrates how tools from cognitive psychology can expose hidden computational properties of DNNs while concurrently providing a computational model for human word learning.

Deep Neural Networks Predict Category Typicality Ratings for Images
It is found that deep convolutional networks trained for classification have substantial predictive power, unlike simpler features computed from the same massive dataset, showing how typicality might emerge as a byproduct of a complex model trained to maximize classification performance.

Pixels to Voxels: Modeling Visual Representation in the Human Brain
The fit models provide a new platform for exploring the functional principles of human vision, and they show that modern methods of computer vision and machine learning provide important tools for characterizing brain function.

CNN-RNN: A Unified Framework for Multi-label Image Classification
The proposed CNN-RNN framework learns a joint image-label embedding to characterize the semantic label dependency as well as the image-label relevance, and it can be trained end-to-end from scratch to integrate both kinds of information in a unified framework.

Weakly Supervised Image Classification with Coarse and Fine Labels
A model based on convolutional neural networks (CNNs) is proposed to address image classification in a weakly supervised scenario where the training data are annotated at different levels of abstraction; it significantly outperforms the naive approach that discards the extra coarsely labeled data.

Rethinking the Inception Architecture for Computer Vision
This work explores ways to scale up networks that aim to utilize the added computation as efficiently as possible, through suitably factorized convolutions and aggressive regularization.

Human Object-Similarity Judgments Reflect and Transcend the Primate-IT Object Representation
It is shown that objects eliciting similar activity patterns in human IT (hIT) tend to be judged as similar by humans, and that hIT is more similar to monkey IT than to human similarity judgments.

Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints
This work proposes to inject corpus-level constraints for calibrating existing structured prediction models, and designs an algorithm based on Lagrangian relaxation for collective inference, reducing the magnitude of bias amplification in multilabel object classification and visual semantic role labeling.