Large Scale Online Learning of Image Similarity Through Ranking
- Gal Chechik, Varun Sharma, Uri Shalit, Samy Bengio
- Computer Science, Journal of Machine Learning Research
- 9 June 2009
OASIS is an online dual approach based on the passive-aggressive family of learning algorithms, with a large-margin criterion and an efficient hinge-loss cost; it suggests that query-independent similarity can be accurately learned even for large-scale datasets that could not be handled before.
Information Bottleneck for Gaussian Variables
- Gal Chechik, A. Globerson, Naftali Tishby, Yair Weiss
- Computer Science, Journal of Machine Learning Research
- 9 December 2003
A formal definition of the general continuous IB problem is given and an analytic solution for the optimal representation for the important case of multivariate Gaussian variables is obtained, in terms of the eigenvalue spectrum.
StyleGAN-NADA: CLIP-Guided Domain Adaptation of Image Generators
- Rinon Gal, Or Patashnik, Haggai Maron, Gal Chechik, D. Cohen-Or
- Computer Science, ArXiv
- 2 August 2021
Leveraging the semantic power of large-scale Contrastive Language-Image Pre-training (CLIP) models, this work presents a text-driven method that allows shifting a generative model to new domains, without having to collect even a single image.
Euclidean Embedding of Co-occurrence Data
- A. Globerson, Gal Chechik, Fernando C Pereira, Naftali Tishby
- Computer Science, Journal of Machine Learning Research
- 1 December 2004
This paper describes a method for embedding objects of different types, such as images and text, into a single common Euclidean space, based on their co-occurrence statistics, and shows that it consistently and significantly outperforms standard methods of statistical correspondence modeling.
Learning from Noisy Large-Scale Datasets with Minimal Supervision
- Andreas Veit, N. Alldrin, Gal Chechik, Ivan Krasin, A. Gupta, Serge J. Belongie
- Computer Science, Computer Vision and Pattern Recognition
- 6 January 2017
An approach that effectively uses millions of images with noisy annotations in conjunction with a small subset of cleanly annotated images to learn powerful image representations; it is particularly effective for a large number of classes with a wide range of annotation noise.
An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion
- Rinon Gal, Yuval Alaluf, D. Cohen-Or
- Computer Science, ArXiv
- 2 August 2022
This work uses only 3–5 images of a user-provided concept to represent it through new “words” in the embedding space of a frozen text-to-image model; these words can be composed into natural language sentences, guiding personalized creation in an intuitive way.
Adaptive Confidence Smoothing for Generalized Zero-Shot Learning
- Y. Atzmon, Gal Chechik
- Computer Science, Computer Vision and Pattern Recognition
- 24 December 2018
Adaptive confidence smoothing (COSMO) is the first model that closes the gap and surpasses the performance of generative models for GZSL, even though it is a lightweight model that is much easier to train and tune.
Learning the Pareto Front with Hypernetworks
- Aviv Navon, Aviv Shamsian, Gal Chechik, Ethan Fetaya
- Computer Science, International Conference on Learning…
- 8 October 2020
This work tackles the problem of learning the entire Pareto front, with the capability of selecting a desired operating point on the front after training; Pareto-front learning (PFL) opens the door to new applications where models are selected based on preferences that are only available at run time.
Reduction of Information Redundancy in the Ascending Auditory Pathway
- Gal Chechik, Michael J. Anderson, Omer Bar-Yosef, E. Young, Naftali Tishby, I. Nelken
- Biology, Psychology, Neuron
- 3 August 2006
An Online Algorithm for Large Scale Image Similarity Learning
- Gal Chechik, Uri Shalit, Varun Sharma, Samy Bengio
- Computer Science, NIPS
- 7 December 2009
The non-metric similarities learned by OASIS can be transformed into metric similarities, achieving higher precision than similarities learned as metrics in the first place, suggesting an approach for learning a metric from data that is larger by orders of magnitude than was handled before.
...