Multiscale transforms designed to process analog and discrete-time signals and images cannot be directly applied to analyze high-dimensional data residing on the vertices of a weighted graph, as they do not capture the intrinsic topology of the graph data domain. In this paper, we adapt the Laplacian pyramid transform for signals on Euclidean domains so …
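As a rough illustration of the idea only (the abstract is truncated, and published graph pyramid constructions typically use spectral vertex-selection and graph-reduction steps omitted here), one analysis level of a Laplacian-pyramid-style transform on a graph can be sketched as: low-pass filter the signal through a spectral kernel of the combinatorial Laplacian, subsample it on a vertex subset, and keep the residual as detail coefficients. The heat kernel h(λ) = exp(−τλ), the every-other-vertex downsampling, and the function names below are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def graph_laplacian(W):
    """Combinatorial Laplacian L = D - W for a symmetric weight matrix W."""
    return np.diag(W.sum(axis=1)) - W

def pyramid_analysis_step(W, x, tau=1.0):
    """One (simplified) analysis level of a Laplacian pyramid on a graph.

    Returns the indices of kept vertices, a coarse approximation on that
    subset, and a detail residual on all vertices.  Vertex selection here is
    naive (every other vertex); real constructions choose vertices spectrally.
    """
    L = graph_laplacian(W)
    lam, U = np.linalg.eigh(L)            # Laplacian eigenvalues/eigenvectors
    h = np.exp(-tau * lam)                # low-pass kernel h(lambda)
    x_smooth = U @ (h * (U.T @ x))        # filtered signal h(L) x
    keep = np.arange(0, len(x), 2)        # naive downsampling set
    coarse = x_smooth[keep]               # coarse approximation
    detail = x - x_smooth                 # prediction residual (details)
    return keep, coarse, detail

# Tiny example: a path graph on 4 vertices
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
x = np.array([1.0, 2.0, 0.5, 3.0])
keep, coarse, detail = pyramid_analysis_step(W, x)
```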
We consider the transductive learning problem when the labels belong to a continuous space. Through the use of spectral graph wavelets, we explore the benefits of multiresolution analysis on a graph constructed from the labeled and unlabeled data. The spectral graph wavelets behave like discrete multiscale differential operators on graphs, and thus can …
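For reference, the standard spectral graph wavelet coefficients (in the sense of Hammond et al.) are obtained by applying a band-pass kernel $g$ at scale $s$ to the graph Fourier transform of the signal; the truncated abstract does not state whether exactly this form is used, so take it as background notation rather than the paper's definition:

$$ \hat{f}(\lambda_\ell) = \sum_{m=1}^{N} u_\ell^{*}(m)\, f(m), \qquad W_f(s, n) = \sum_{\ell=0}^{N-1} g(s\lambda_\ell)\, \hat{f}(\lambda_\ell)\, u_\ell(n), $$

where $\{(\lambda_\ell, u_\ell)\}_{\ell=0}^{N-1}$ are the eigenpairs of the graph Laplacian. Small scales $s$ make $g(s\lambda)$ pass high graph frequencies, which is why these wavelets act like localized multiscale differential operators.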
In this paper we introduce a method to construct large Welch Bound Equality (WBE) codes from small WBE codes. The advantage of these codes is that the implementation of the ML decoder for the large codes reduces to the implementation of the ML decoder for the small core codes, which drastically reduces the computational cost of ML decoding. …
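For context (a background definition, not taken from the truncated abstract): a set of $M$ unit-norm codewords $c_1, \dots, c_M \in \mathbb{C}^N$ with $M \ge N$ satisfies the Welch bound

$$ \sum_{i=1}^{M} \sum_{j=1}^{M} \left| \langle c_i, c_j \rangle \right|^{2} \;\ge\; \frac{M^{2}}{N}, $$

and WBE codes are those that meet it with equality, which holds exactly when the codeword matrix $C = [\,c_1 \cdots c_M\,]$ forms a tight frame, $C C^{H} = \tfrac{M}{N} I_N$.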
Surprise is a ubiquitous concept describing a wide range of phenomena, from unexpected events to behavioral responses. We propose a measure of surprise and use it to arrive at a new framework for surprise-driven learning. There are two components to this framework: (i) a confidence-adjusted surprise measure to capture environmental statistics as well as subjective …
Surprise is a central concept in learning, attention, and the study of the neural basis of behaviour. However, how surprise affects learning and, more specifically, how it shapes synaptic learning rules in neural networks remains largely undetermined. Here we study how surprise facilitates learning in different environments and how surprise can potentially …
Surprise is informative because it drives attention and modifies learning. Not only has it been described at different stages of neural processing [1], but it is a central concept in higher levels of abstraction such as learning and memory formation [2]. Several methods, including Bayesian and information-theoretic approaches, have been used to quantify …
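Two quantifications commonly used in this literature (stated here as background; the truncated abstract does not say which one is adopted) are Shannon surprise, the negative log probability of an observation under the current model, and Bayesian surprise, the KL divergence between posterior and prior beliefs after the observation:

$$ S_{\mathrm{Shannon}}(x) = -\log P(x), \qquad S_{\mathrm{Bayes}}(x) = D_{\mathrm{KL}}\!\left( P(\theta \mid x) \,\|\, P(\theta) \right) = \int P(\theta \mid x)\, \log \frac{P(\theta \mid x)}{P(\theta)} \, d\theta . $$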
Surprise is informative because it drives attention [IB09] and modifies learning [SD00]. Correlates of surprise have been observed at different stages of neural processing, and found to be relevant for learning and memory formation [RR03]. Although surprise is ubiquitous, there is neither a widely accepted theory that quantitatively links surprise to …
Model parameters were fit by maximizing the log likelihood, and the Bayesian information criterion (BIC) was used to rank the candidate models. The task: numbers are drawn from a normal distribution whose mean can abruptly change with hazard rate H. The goal is to sequentially estimate the underlying mean from noisy observations. …
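A minimal sketch of the described setup, assuming Gaussian observation noise, a Gaussian prior over the changing mean, and the textbook BIC formula; the priors, parameter counts, log likelihoods, and names below are illustrative placeholders, not values from the poster.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_task(n_trials=200, hazard=0.1, sigma=1.0, mean_sd=5.0):
    """Generative task: Gaussian observations whose underlying mean jumps to a
    freshly drawn value with probability `hazard` on each trial."""
    mu = rng.normal(0.0, mean_sd)
    means, obs = np.empty(n_trials), np.empty(n_trials)
    for t in range(n_trials):
        if rng.random() < hazard:          # abrupt change of the mean
            mu = rng.normal(0.0, mean_sd)
        means[t] = mu
        obs[t] = rng.normal(mu, sigma)     # noisy observation of the mean
    return means, obs

def bic(log_likelihood, n_params, n_obs):
    """Bayesian information criterion; lower values indicate a better fit
    after penalizing the number of free parameters."""
    return n_params * np.log(n_obs) - 2.0 * log_likelihood

# Rank hypothetical fitted models by BIC (log likelihoods are placeholders).
fits = {"model_1": (-512.3, 2), "model_2": (-509.8, 4), "model_3": (-520.1, 1)}
n_obs = 200
ranking = sorted(fits, key=lambda name: bic(*fits[name], n_obs))
```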