Joaquim Marques de Sá

BACKGROUND Congenital deletions affecting 3q11q23 have rarely been reported and only five cases have been molecularly characterised. Genotype-phenotype correlation has been hampered by the variable sizes and breakpoints of the deletions. In this study, 14 novel patients with deletions in 3q11q23 were investigated and compared with 13 previously reported …
BACKGROUND Floating-Harbor syndrome (FHS) is a rare condition characterized by short stature, delays in expressive language, and a distinctive facial appearance. Recently, heterozygous truncating mutations in SRCAP were determined to be disease-causing. With the availability of a DNA-based confirmatory test, we set forth to define the clinical features of …
Lenz-Majewski syndrome (LMS) is a syndrome of intellectual disability and multiple congenital anomalies that features generalized craniotubular hyperostosis. By using whole-exome sequencing and selecting variants consistent with the predicted dominant de novo etiology of LMS, we identified causative heterozygous missense mutations in PTDSS1, which encodes …
Recent years have witnessed increasing attention to entropy-based criteria in adaptive systems. Several principles have been proposed based on the maximization or minimization of entropic cost functions. We propose a new type of neural network classifier with multilayer perceptron (MLP) architecture, but where the usual mean square error minimization …
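As a rough illustration of the idea, not the authors' implementation, an entropic cost can replace mean square error by estimating Rényi's quadratic entropy of the network errors with a Parzen window; the function name and kernel width below are illustrative:

```python
import numpy as np

def renyi_quadratic_entropy(errors, sigma=0.5):
    """Parzen-window estimate of Renyi's quadratic entropy of the errors:
    H2 = -log( (1/N^2) * sum_ij G(e_i - e_j; 2*sigma^2) )."""
    e = np.asarray(errors, dtype=float)
    diff = e[:, None] - e[None, :]        # all pairwise error differences
    var = 2.0 * sigma ** 2                # variance of the convolved kernel
    kernel = np.exp(-diff ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)
    ip = kernel.mean()                    # "information potential" estimate
    return -np.log(ip)
```

Minimizing this quantity during training pushes the error samples to concentrate, which is the intuition behind entropy-based learning criteria.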
Hierarchical clustering is a stepwise clustering method usually based on proximity measures between objects or sets of objects from a given data set. The most common proximity measures are distance measures. The derived proximity matrices can be used to build graphs, which provide the basic structure for some clustering methods. We present here a new …
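The general agglomerative scheme referred to above can be sketched in a few lines; this is a generic single-linkage example with distance as the proximity measure, not the specific method the paper proposes:

```python
import numpy as np

def single_linkage(points, n_clusters):
    """Agglomerative clustering: repeatedly merge the two clusters whose
    closest members are nearest (single-linkage proximity)."""
    pts = np.asarray(points, dtype=float)
    clusters = [[i] for i in range(len(pts))]
    while len(clusters) > n_clusters:
        best, best_d = (0, 1), np.inf
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # single linkage: distance between closest members
                d = min(np.linalg.norm(pts[i] - pts[j])
                        for i in clusters[a] for j in clusters[b])
                if d < best_d:
                    best_d, best = d, (a, b)
        a, b = best
        clusters[a] += clusters[b]        # merge the nearest pair
        del clusters[b]
    return clusters
```

Replacing the `min` by `max` or a mean gives complete or average linkage; the sequence of merges is what defines the hierarchy.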
Transfer Learning is a paradigm in machine learning for solving a target problem by reusing, with minor modifications, what was learned on a different but related source problem. In this paper we propose a novel feature transference approach, especially for the case where the source and the target problems are drawn from different distributions. We use deep neural networks to …
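A minimal sketch of feature transference, assuming the common setup of freezing a source network's hidden layer and retraining only a new output layer on the target task (the network, data, and names here are stand-ins, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Stand-in for a pretrained source network's hidden layer,
# reused as a frozen feature extractor on the target task.
W_source = rng.normal(size=(2, 8))

def transfer_features(X):
    """Map target inputs through the frozen source hidden layer."""
    return relu(X @ W_source)

# Target task (different distribution): only the output layer is trained.
X_target = rng.normal(size=(100, 2))
y_target = (X_target[:, 0] + X_target[:, 1] > 0).astype(float)

H = transfer_features(X_target)
w_out = np.zeros(8)
for _ in range(200):                       # gradient descent on logistic loss
    p = 1.0 / (1.0 + np.exp(-(H @ w_out)))
    w_out -= 0.1 * H.T @ (p - y_target) / len(y_target)

acc = np.mean((H @ w_out > 0) == (y_target > 0.5))
```

Only `w_out` is updated; the transferred features stay fixed, which is the simplest form of reusing source-problem learning.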
Entropy-based cost functions are enjoying growing popularity in unsupervised and supervised classification tasks. Better performance in terms of both error rate and speed of convergence has been reported. In this letter, we study the principle of error entropy minimization (EEM) from a theoretical point of view. We use Shannon's entropy and study …
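To make the EEM principle concrete: the Shannon entropy of the error variable is lowest when the errors concentrate on a single value. A simple histogram-based estimate illustrates this (the estimator and bin choices are illustrative, not from the letter):

```python
import numpy as np

def shannon_entropy(errors, bins=20):
    """Histogram-based estimate of the Shannon entropy of the error variable."""
    counts, _ = np.histogram(errors, bins=bins, range=(-1.0, 1.0))
    p = counts / counts.sum()
    p = p[p > 0]                           # 0 * log(0) is taken as 0
    return -np.sum(p * np.log(p))
```

Errors piled up in one bin give entropy 0, while errors spread uniformly over the range approach log(bins); EEM training drives the error distribution toward the concentrated case.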
We propose a new cost function for neural network classification: the error density at the origin. This method provides a simple objective function that can be easily plugged into the usual backpropagation algorithm, giving a simple and efficient learning scheme. Experimental work shows the effectiveness and superiority of the proposed method when compared to …
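The error density at the origin can be estimated with a Parzen window: a sketch under the assumption of a Gaussian kernel (kernel width and names are illustrative); training then maximizes this quantity, since it peaks when all errors are zero:

```python
import numpy as np

def error_density_at_origin(errors, sigma=0.3):
    """Parzen estimate of the error density at the origin, f_E(0) =
    (1/N) * sum_i G(e_i; sigma^2). Maximizing it drives errors to zero."""
    e = np.asarray(errors, dtype=float)
    kernel = np.exp(-e ** 2 / (2.0 * sigma ** 2))
    return kernel.mean() / (sigma * np.sqrt(2.0 * np.pi))
```

Because the estimate is a smooth function of the errors, its gradient with respect to the network outputs can be propagated by ordinary backpropagation, which is what makes the objective easy to plug in.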
One way of using entropy criteria in learning systems is to minimize the entropy of the error between two variables: typically, one is the output of the learning system and the other is the target. This framework has been used for regression. In this paper we show how to use the minimization of the entropy of the error for classification. The …
In this paper we address some open questions on the recently proposed Zero-Error Density Maximization algorithm for MLP training. We propose a new version of the cost function that solves a training problem encountered in previous work and prove that the use of a nonparametric density estimator preserves the optimal solution. Some experiments are reported …