Mohammad Javad Faraji

Multiscale transforms designed to process analog and discrete-time signals and images cannot be directly applied to analyze high-dimensional data residing on the vertices of a weighted graph, as they do not capture the intrinsic topology of the graph data domain. In this paper, we adapt the Laplacian pyramid transform for signals on Euclidean domains so …
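As a rough illustration of the idea behind a pyramid-style multiscale analysis on a graph (a coarse, smoothed approximation plus a detail residual that permits reconstruction), the sketch below runs one such level on a toy signal. The ring graph, the ideal low-pass spectral filter, and the signal are illustrative assumptions only; this is not the transform developed in the paper.

```python
import numpy as np

# Toy weighted graph: 6 vertices arranged on a ring, adjacency W.
n = 6
W = np.zeros((n, n))
for i in range(n):
    W[i, (i + 1) % n] = W[(i + 1) % n, i] = 1.0

# Combinatorial graph Laplacian L = D - W and its eigendecomposition.
L = np.diag(W.sum(axis=1)) - W
lam, U = np.linalg.eigh(L)               # lam: graph "frequencies", U: graph Fourier basis

x = np.array([0., 1., 0., 3., 0., 1.])   # signal living on the vertices

# One pyramid-style level: low-pass approximation + detail residual.
h = (lam <= lam[n // 2]).astype(float)   # crude ideal low-pass spectral filter (assumption)
x_coarse = U @ (h * (U.T @ x))           # smoothed (coarse) approximation
detail = x - x_coarse                    # residual stored alongside the coarse part

print(np.allclose(x, x_coarse + detail))  # True: coarse + detail recovers the original signal
```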
We consider the transductive learning problem when the labels belong to a continuous space. Through the use of spectral graph wavelets, we explore the benefits of multiresolution analysis on a graph constructed from the labeled and unlabeled data. The spectral graph wavelets behave like discrete multiscale differential operators on graphs, and thus can …
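To make the notion of spectral graph wavelets concrete, the sketch below builds wavelet operators of the form T_s = U g(sΛ) Uᵀ from the eigendecomposition of a toy graph Laplacian and applies them to a signal at several scales. The path graph, the band-pass kernel g, and the scales are assumptions chosen for illustration; this is not the paper's transductive regression scheme.

```python
import numpy as np

# Toy graph: path graph on 8 vertices.
n = 8
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
L = np.diag(W.sum(axis=1)) - W
lam, U = np.linalg.eigh(L)               # graph Laplacian eigenvalues/eigenvectors

def wavelet_operator(g, s):
    """Spectral graph wavelet operator T_s = U g(s*Lambda) U^T for kernel g at scale s."""
    return U @ np.diag(g(s * lam)) @ U.T

# Hypothetical band-pass kernel: g(0) = 0, so smooth (low-frequency) components are
# suppressed and the operator acts like a multiscale differential operator on the graph.
g = lambda t: t * np.exp(-t)

f = np.arange(n, dtype=float)            # a signal (e.g., labels) on the vertices
coeffs = {s: wavelet_operator(g, s) @ f for s in (0.5, 1.0, 2.0)}   # multiscale coefficients
```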
In this paper we introduce a method to reconstruct large-size Welch Bound Equality (WBE) codes from small-size WBE codes. The advantage of these codes is that the implementation of the ML decoder for the large-size codes reduces to the implementation of the ML decoder for the core codes. This leads to a drastic reduction of the computational cost of ML …
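The paper's construction is not reproduced here; as a hedged illustration of the general idea of growing a large WBE code from a small core code, the snippet below checks one well-known route: the Kronecker product of two WBE codes is again WBE, since the frame-operator identity (S₁⊗S₂)(S₁⊗S₂)ᴴ = (S₁S₁ᴴ)⊗(S₂S₂ᴴ) = (N₁N₂/(K₁K₂)) I is preserved. The core code used here is an illustrative harmonic frame.

```python
import numpy as np

def is_wbe(S, tol=1e-9):
    """A K x N unit-norm code meets the Welch bound with equality iff S S^H = (N/K) I."""
    K, N = S.shape
    return np.allclose(S @ S.conj().T, (N / K) * np.eye(K), atol=tol)

# Small core WBE code: 2 x 3 harmonic frame with unit-norm columns (illustrative choice).
w = np.exp(2j * np.pi / 3)
S_small = np.array([[1, 1, 1],
                    [1, w, w**2]]) / np.sqrt(2)

# Grow a larger code from the core via a Kronecker product
# (one possible construction, not necessarily the one proposed in the paper).
S_large = np.kron(S_small, S_small)      # 4 x 9 code built from the 2 x 3 core

print(is_wbe(S_small), is_wbe(S_large))  # True True
```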
Surprise is a central concept in learning, attention and the study of the neural basis of behaviour. However, how surprise affects learning, and more specifically how it shapes synaptic learning rules in neural networks, remains largely unknown. Here we study how surprise facilitates learning in different environments and how surprise can potentially …
One of the most frequent problems in both decision making and reinforcement learning (RL) is the maximization of expectations of functionals such as reward or utility. Generally, these problems consist of computing the optimal solution with respect to a density function. Instead of trying to find this exact solution, a common approach is to approximate it through a …
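Because the snippet is truncated, the specific approximation the paper proposes is not shown here. The sketch below only illustrates the underlying computation in its simplest sampling-based form: estimating the expectation of a reward functional under a density by Monte Carlo. The Gaussian density and the reward functional are hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a reward functional r(x) and a density p(x) = N(mu, sigma^2).
r = lambda x: np.maximum(0.0, x)         # illustrative reward/utility functional
mu, sigma = 1.0, 2.0                     # parameters of the density (assumptions)

# Instead of evaluating E_p[r(X)] in closed form, approximate it with samples from p.
samples = rng.normal(mu, sigma, size=100_000)
expected_reward_mc = samples_mean = np.mean(r(samples))

print(expected_reward_mc)                # Monte Carlo estimate of the expected reward
```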
Surprise is informative because it drives attention [1] and modifies learning [2]. Correlates of surprise have been observed at different stages of neural processing, and found to be relevant for learning and memory formation [3]. Although surprise is ubiquitous, there is neither a widely accepted theory that quantitatively links surprise to observed …
Surprise is informative because it drives attention and modifies learning. Not only has it been described at different stages of neural processing [1], but it is a central concept in higher levels of abstraction such as learning and memory formation [2]. Several methods, including Bayesian and information theoretical approaches, have been used to quantify …
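To make the two families of measures mentioned above concrete, here is a small sketch of Shannon surprise (negative log predictive probability of an observation) and Bayesian surprise (KL divergence between posterior and prior) in a Beta-Bernoulli model. This is a textbook illustration under assumed prior parameters, not the specific surprise measure proposed in this work.

```python
import numpy as np
from scipy.special import betaln, digamma

# Beta-Bernoulli model: prior Beta(a, b) over the probability of a binary event.
a, b = 1.0, 1.0                  # uniform prior (assumption)
x = 1                            # a new binary observation

# Shannon surprise: negative log of the predictive probability of the observation.
p_pred = a / (a + b) if x == 1 else b / (a + b)
shannon_surprise = -np.log(p_pred)

# Bayesian surprise: KL(posterior || prior) after updating on the observation.
a_post, b_post = a + x, b + (1 - x)

def kl_beta(a1, b1, a2, b2):
    """KL divergence KL(Beta(a1, b1) || Beta(a2, b2))."""
    return (betaln(a2, b2) - betaln(a1, b1)
            + (a1 - a2) * digamma(a1) + (b1 - b2) * digamma(b1)
            + (a2 - a1 + b2 - b1) * digamma(a1 + b1))

bayesian_surprise = kl_beta(a_post, b_post, a, b)
print(shannon_surprise, bayesian_surprise)
```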