Corpus ID: 13874643

Siamese Neural Networks for One-Shot Image Recognition

@inproceedings{Koch2015SiameseNN,
  title={Siamese Neural Networks for One-Shot Image Recognition},
  author={Gregory R. Koch},
  year={2015}
}
The process of learning good features for machine learning applications can be very computationally expensive and may prove difficult in cases where little data is available. A prototypical example of this is the one-shot learning setting, in which we must correctly make predictions given only a single example of each new class. In this paper, we explore a method for learning siamese neural networks which employ a unique structure to naturally rank similarity between inputs. Once a network has… 
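
The core mechanism described in the abstract is easy to sketch: two inputs pass through the same ("twin") convolutional encoder, and a learned weighting of the component-wise L1 distance between their feature vectors is passed through a sigmoid to score similarity. The PyTorch sketch below is illustrative only; layer sizes, the 105x105 input shape and the `SiameseNet` name are placeholders rather than the paper's exact configuration.

```python
# Minimal sketch of a convolutional siamese network: a shared encoder
# embeds both inputs, and a learned weighting of the component-wise
# |h1 - h2| distance is squashed through a sigmoid to give a similarity
# score. Layer sizes are illustrative, not the paper's exact architecture.
import torch
import torch.nn as nn

class SiameseNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=10), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=7), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.LazyLinear(4096), nn.Sigmoid(),   # feature vector h(x)
        )
        self.head = nn.Linear(4096, 1)           # learned weights on |h1 - h2|

    def forward(self, x1, x2):
        h1, h2 = self.encoder(x1), self.encoder(x2)
        return torch.sigmoid(self.head(torch.abs(h1 - h2)))  # similarity in (0, 1)

# One-shot (5-way) prediction: pair the query with one example per
# candidate class and pick the class whose example scores highest.
net = SiameseNet()
query = torch.randn(1, 1, 105, 105)
support = torch.randn(5, 1, 105, 105)
scores = net(query.expand(5, -1, -1, -1), support)
prediction = scores.argmax().item()
```

Training then reduces to binary classification of same/different pairs under a cross-entropy loss, so the network can be applied to entirely new classes at test time without retraining.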


Few-shot malicious traffic classification based on Siamese Neural Network

  • Kailin Wu, Pan Wang, Zixuan Wang
  • Computer Science
    2021 IEEE 23rd Int Conf on High Performance Computing & Communications; 7th Int Conf on Data Science & Systems; 19th Int Conf on Smart City; 7th Int Conf on Dependability in Sensor, Cloud & Big Data Systems & Application (HPCC/DSS/SmartCity/DependSys)
  • 2021
This article proposes a method based on a one-dimensional convolutional siamese neural network, which uses a unique structure to naturally rank the similarities between inputs and achieves higher recognition accuracy than traditional approaches that train on SMOTE- or GAN-generated data.

Multi-Resolution Siamese Networks for One-Shot Learning

This work proposes an improved architecture and a novel training method that increase 1-shot 5-way classification accuracy on 5 entirely novel classes by around 5%, 19%, 18% and 13% over vanilla Siamese networks when tested on Omniglot, Tiny-ImageNet, CIFAR-100 and a custom dataset recorded with an event-driven camera, respectively.

One-Shot Learning in Discriminative Neural Networks

A Bayesian procedure for updating a pretrained convnet to classify a novel image category for which data is limited is explored, which demonstrates competitive performance with state-of-the-art methods whilst also being consistent with 'normal' methods for training deep networks on large data.

One-Shot Learning-Based Handwritten Word Recognition

This paper explores a one-shot learning approach to handwritten word recognition, using a convolutional Siamese network to classify handwritten images at the word level.

Compare Learning: Bi-Attention Network for Few-Shot Learning

  • Li Ke, Meng Pan, Weigao Wen, Dong Li
  • Computer Science
    ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
  • 2020
A novel Bi-attention network for comparing instances is proposed, which measures the similarity between instance embeddings precisely, globally and efficiently, and is verified on two benchmarks.

One-shot Learning with Siamese Networks for Environmental Audio

The results show that convolutional siamese networks are indeed a valid approach to the difficult one-shot classification task for environmental audio.

One-Shot Learning for Handwritten Character Recognition

Though overshadowed by more modern approaches, this project shows that even relatively simple deep learning models provide compelling results in many domains, including one-shot learning.

Make SVM great again with Siamese kernel for few-shot learning

An end-to-end learning framework for training adaptive-kernel SVMs, in which Siamese feature learning is combined with the Support Vector Machine (SVM) working mechanism, eliminating the problem of choosing a suitable kernel and hand-crafting features for SVMs.
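
As a rough illustration of how Siamese feature learning and SVMs can be combined, the sketch below freezes a siamese-style encoder and fits a standard scikit-learn SVM on the resulting features for a few-shot task. This is not the paper's end-to-end adaptive-kernel formulation; the `encoder` callable and the hyperparameters are assumptions for the example.

```python
# Illustrative pipeline only: featurize a handful of labelled support
# examples with a frozen siamese-style encoder, then fit an ordinary SVM
# on those features. Not the paper's end-to-end "Siamese kernel" method.
import numpy as np
from sklearn.svm import SVC

def embed(images, encoder):
    """Featurize raw inputs with a pretrained siamese-style encoder (assumed given)."""
    return np.stack([encoder(img) for img in images])

def fit_few_shot_svm(X_support, y_support, encoder):
    """Fit an SVM on the embedded support set (a few examples per class)."""
    clf = SVC(kernel="rbf", C=10.0)
    clf.fit(embed(X_support, encoder), y_support)
    return clf
```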

One Shot Logo Recognition Based on Siamese Neural Networks

This work presents an approach for one-shot logo recognition that relies on a Siamese neural network (SNN) embedded with a pre-trained model that is fine-tuned on a challenging logo dataset.

Weakly Supervised One-Shot Detection with Attention Similarity Networks

A novel neural network architecture is presented that manages to simultaneously identify and localise instances of classes unseen at training time and considerably outperforms the baseline methods for the weakly supervised one-shot detection task.
...

References

Showing 1-10 of 32 references

Very Deep Convolutional Networks for Large-Scale Image Recognition

This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.

One-shot learning by inverting a compositional causal process

A Hierarchical Bayesian model based on compositionality and causality is introduced that can learn a wide range of natural (although simple) visual concepts, generalizing in human-like ways from just one image.

One shot learning of simple visual concepts

A generative model of how characters are composed from strokes is introduced, where knowledge from previous characters helps to infer the latent strokes in novel characters, using a massive new dataset of handwritten characters.

ImageNet classification with deep convolutional neural networks

A large, deep convolutional neural network was trained to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes and employed a recently developed regularization method called "dropout" that proved to be very effective.

Learning a similarity metric discriminatively, with application to face verification

The idea is to learn a function that maps input patterns into a target space such that the L1 norm in the target space approximates the "semantic" distance in the input space.
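
This mapping-plus-L1-norm idea lends itself to a short sketch: a shared encoder projects both inputs into the target space, and pairs are pulled together or pushed apart according to their labels. The margin-based contrastive loss below is a common formulation of this objective and is not claimed to be the paper's exact energy function; `margin` and the function names are illustrative.

```python
# Sketch of the metric-learning objective: the L1 distance in the target
# space is trained to be small for genuine (same-identity) pairs and to
# exceed a margin for impostor pairs.
import torch

def l1_distance(h1, h2):
    """L1 distance between batches of embeddings in the target space."""
    return torch.sum(torch.abs(h1 - h2), dim=1)

def contrastive_loss(h1, h2, same, margin=1.0):
    """same: 1 for genuine (same-identity) pairs, 0 for impostor pairs."""
    d = l1_distance(h1, h2)
    pull = same * d                                       # genuine pairs: shrink distance
    push = (1 - same) * torch.clamp(margin - d, min=0.0)  # impostors: enforce the margin
    return (pull + push).mean()
```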

Zero-shot Learning with Semantic Output Codes

A semantic output code classifier which utilizes a knowledge base of semantic properties of Y to extrapolate to novel classes and can often predict words that people are thinking about from functional magnetic resonance images of their neural activity, even without training examples for those words.

A Fast Learning Algorithm for Deep Belief Nets

A fast, greedy algorithm is derived that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory.

Learning Deep Architectures for AI

The motivations and principles regarding learning algorithms for deep architectures are discussed, in particular those exploiting as building blocks unsupervised learning of single-layer models such as Restricted Boltzmann Machines, which are used to construct deeper models such as Deep Belief Networks.

One-shot learning of object categories

It is found that on a database of more than 100 categories, the Bayesian approach produces informative models when the number of training examples is too small for other methods to operate successfully.

A Bayesian approach to unsupervised one-shot learning of object categories

This work presents a method for learning object categories from just a few images, based on incorporating "generic" knowledge which may be obtained from previously learnt models of unrelated categories, in a variational Bayesian framework.