Efficient learnability using the state merging algorithm is known for a subclass of probabilistic automata termed µ-distinguishable. In this paper, we prove that state merging algorithms can be extended to efficiently learn a larger class of automata. In particular, we show learnability of a subclass which we call µ2-distinguishable. Using an analog of the…
We present a framework for efficient extrapolation of reduced rank approximations, graph kernels, and locally linear embeddings (LLE) to unseen data. We also present a principled method to combine many of these kernels and then extrapolate them. Central to our method is a theorem for matrix approximation, and an extension of the representer theorem to…
We present a principled method to combine kernels under joint regularization constraints. Central to our method is an extension of the representer theorem for handling multiple joint regularization constraints. Experimental evidence shows the feasibility of our approach.
Except where otherwise indicated, this thesis is my own original work.

Acknowledgements

This thesis would not have been possible without the generous scholarships from NICTA and ANU, and without the help and support of many friends, colleagues, and my family. Foremost, I would like to thank my ex-supervisory panel chair S.V.N. (Vishy) Vishwanathan. For…