Learning Everything about Anything: Webly-Supervised Visual Concept Learning

@inproceedings{Divvala2014LearningEA,
  title={Learning Everything about Anything: Webly-Supervised Visual Concept Learning},
  author={Santosh Kumar Divvala and Ali Farhadi and Carlos Guestrin},
  booktitle={2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2014},
  pages={3270--3277}
}
Recognition is graduating from labs to real-world applications. While it is encouraging to see its potential being tapped, it brings forth a fundamental challenge to the vision researcher: scalability. How can we learn a model for any concept that exhaustively covers all its appearance variations, while requiring minimal or no human supervision for compiling the vocabulary of visual variance, gathering the training images and annotations, and learning the models? In this paper, we introduce a…
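The pipeline the abstract alludes to (expand a concept into a vocabulary of visual variations, gather images per variation, and prune unusable sub-queries) can be sketched roughly as follows. This is a minimal illustration only: the variation vocabulary, image identifiers, and the minimum-image threshold are invented stand-ins, not the paper's actual data, queries, or parameters.

```python
# Illustrative sketch of a webly-supervised concept-expansion pipeline.
# The variation vocabulary and image lists are hard-coded stand-ins for
# what would normally come from n-gram statistics and web image search.

from typing import Dict, List, Set

# Hypothetical visual variations for the concept "horse" (assumption,
# not the paper's mined output).
VARIATIONS: Dict[str, List[str]] = {
    "jumping horse": ["img01", "img02", "img03"],
    "rocking horse": ["img04", "img05"],
    "horse head":    ["img02", "img06"],  # shares img02 with "jumping horse"
    "dark horse":    [],                  # idiomatic usage: no usable images
}

def prune_variations(variations: Dict[str, List[str]],
                     min_images: int = 2) -> Dict[str, List[str]]:
    """Drop sub-queries that return too few images to train a detector."""
    return {q: imgs for q, imgs in variations.items() if len(imgs) >= min_images}

def gather_training_set(variations: Dict[str, List[str]]) -> Set[str]:
    """Union the per-variation image pools, de-duplicating shared images."""
    pool: Set[str] = set()
    for imgs in variations.values():
        pool.update(imgs)
    return pool

kept = prune_variations(VARIATIONS)
train = gather_training_set(kept)
```

In this toy run, the non-visual sub-query "dark horse" is pruned and the remaining image pools are merged with duplicates removed, which mirrors the kind of automatic vocabulary filtering the abstract describes.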


Citations

This paper has been cited by 166 publications (estimated 43% coverage), including 19 highly influenced citations, and has averaged 40 citations per year over the last three years.

