Masatran, Rajasekaran

IIT Madras. Yahoo. BTech, IIIT Hyderabad.

A Latent-Variable Lattice Model

Abstract: Markov random field (MRF) learning is intractable, and its approximation algorithms are computationally expensive. We target a small subset of MRFs that is used frequently in computer vision. We characterize this subset with three concepts: Lattice, Homogeneity, and Inertia; and design a non-Markov model as an alternative. Our goal is robust learning from small datasets. Our learning algorithm uses vector quantization and, at time complexity O(u log u) for a dataset of u pixels, is much faster than that of general-purpose MRFs.
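The sketch below illustrates only the kind of vector-quantization step the abstract alludes to: pixel neighbourhoods are quantized against a small codebook, and codeword usage is tabulated. The 3x3 patch size, codebook size, and plain k-means quantizer are illustrative assumptions, not the paper's algorithm, and no claim is made about matching its O(u log u) procedure.

```python
# Minimal sketch (assumed details): vector-quantize 3x3 pixel neighbourhoods
# with a small codebook, then tabulate codeword frequencies.
import numpy as np

def extract_patches(img, size=3):
    """Return one flattened size x size neighbourhood per interior pixel."""
    h, w = img.shape
    r = size // 2
    patches = [
        img[i - r:i + r + 1, j - r:j + r + 1].ravel()
        for i in range(r, h - r)
        for j in range(r, w - r)
    ]
    return np.asarray(patches, dtype=float)

def vector_quantize(vectors, k=16, iters=20, seed=0):
    """Plain Lloyd's k-means; returns (codebook, codeword index per vector)."""
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), size=k, replace=False)].copy()
    for _ in range(iters):
        # Assign each vector to its nearest codeword.
        dists = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each codeword to the mean of its assigned vectors.
        for c in range(k):
            members = vectors[labels == c]
            if len(members):
                codebook[c] = members.mean(axis=0)
    return codebook, labels

if __name__ == "__main__":
    img = np.random.default_rng(1).integers(0, 256, size=(64, 64)).astype(float)
    patches = extract_patches(img)
    codebook, labels = vector_quantize(patches)
    print("codeword usage:", np.bincount(labels, minlength=len(codebook)))
```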

A Marginal-Based Technique for Distribution Estimation

Abstract: Estimating a distribution over a vector random variable, given a source of independent random instances drawn from the distribution, is a standard problem in machine learning. Frequently, the components have limited dependency between each other, and this aspect can be exploited for estimation with fewer samples. We propose a novel technique that estimates the distribution efficiently, using one-dimensional marginals. Like naive Bayes, our technique is suited for incremental estimation. Compared to the naive Bayes assumption, our technique provides better accuracy. Experiments support our claims, for datasets of different dimensionality.
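For concreteness, the following sketch shows only the product-of-marginals (naive Bayes) baseline that the abstract compares against, not the proposed technique: each dimension keeps its own one-dimensional histogram, samples can be absorbed incrementally, and a query point's joint density is approximated by the product of its marginal densities. The bin count and value range are illustrative assumptions.

```python
# Minimal sketch of the naive Bayes (independence) baseline:
# per-dimension 1-D histograms, updated one sample at a time.
import numpy as np

class MarginalHistogramEstimator:
    def __init__(self, dim, bins=32, low=0.0, high=1.0):
        self.edges = np.linspace(low, high, bins + 1)
        self.counts = np.zeros((dim, bins))
        self.n = 0

    def update(self, x):
        """Incrementally add one d-dimensional sample."""
        idx = np.clip(np.searchsorted(self.edges, x, side="right") - 1,
                      0, self.counts.shape[1] - 1)
        self.counts[np.arange(len(x)), idx] += 1
        self.n += 1

    def density(self, x):
        """Approximate the joint density as a product of 1-D marginal densities."""
        width = self.edges[1] - self.edges[0]
        idx = np.clip(np.searchsorted(self.edges, x, side="right") - 1,
                      0, self.counts.shape[1] - 1)
        marginals = self.counts[np.arange(len(x)), idx] / (self.n * width)
        return float(np.prod(marginals))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    est = MarginalHistogramEstimator(dim=3)
    for sample in rng.uniform(size=(5000, 3)):
        est.update(sample)
    print(est.density(np.array([0.5, 0.5, 0.5])))  # close to 1.0 for uniform data
```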

A Wavelet Decomposition of Probability Distributions

Two important goals in image analysis are identifying the object at a specified location, and locating a specified object in an image. Many of the best algorithms for these tasks use histograms. Histograms are efficient for indexing into a large database of models. They are robust to occlusion and change in view, and can distinguish between a large number of objects. Illumination invariance is an important consideration in computer vision. Cross-bin histogram distance measures, such as the match distance and the earth mover's distance (EMD), work well for this. Our objective is to provide an improved distance measure between histograms on the real line. We introduce an orthonormal basis for the space of probability distributions, making it a Hilbert space.
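To make the wavelet viewpoint concrete, the sketch below expands two histograms in the standard orthonormal Haar basis and compares their coefficients. With an orthonormal basis, the plain Euclidean distance between coefficient vectors equals the bin-by-bin L2 distance (Parseval); down-weighting fine-scale coefficients, as in the final weighting line, gives a cross-bin measure in the spirit of EMD approximations. The Haar basis and the particular weights are illustrative assumptions, not necessarily the basis constructed in the paper.

```python
# Minimal sketch (assumed details): Haar-coefficient comparison of two
# length-2**k histograms, with fine scales down-weighted.
import numpy as np

def haar_coefficients(h):
    """Orthonormal Haar transform; returns (level, detail array) pairs and the coarse term."""
    approx = np.asarray(h, dtype=float)
    assert (len(approx) & (len(approx) - 1)) == 0, "length must be a power of two"
    details, level = [], 0
    while len(approx) > 1:
        even, odd = approx[0::2], approx[1::2]
        details.append((level, (even - odd) / np.sqrt(2.0)))
        approx = (even + odd) / np.sqrt(2.0)
        level += 1
    return details, approx[0]

def wavelet_distance(h1, h2, fine_scale_weight=0.5):
    """Weighted L2 distance between the Haar coefficients of two histograms."""
    d1, m1 = haar_coefficients(h1)
    d2, m2 = haar_coefficients(h2)
    total = (m1 - m2) ** 2
    n_levels = len(d1)
    for (lvl, c1), (_, c2) in zip(d1, d2):
        # Finer scales (lower lvl) get smaller weights, so shifting mass to a
        # neighbouring bin costs less than shifting it far away.
        weight = fine_scale_weight ** (n_levels - 1 - lvl)
        total += weight * np.sum((c1 - c2) ** 2)
    return float(np.sqrt(total))

if __name__ == "__main__":
    a = np.array([0, 4, 0, 0, 0, 0, 0, 0], dtype=float)
    b = np.array([0, 0, 4, 0, 0, 0, 0, 0], dtype=float)  # mass shifted by one bin
    c = np.array([0, 0, 0, 0, 0, 0, 4, 0], dtype=float)  # mass shifted much farther
    print(wavelet_distance(a, b), wavelet_distance(a, c))
```

Note that a bin-by-bin L2 distance would judge a-to-b and a-to-c as equally far apart; the scale-weighted coefficient comparison above ranks the nearby shift as closer, which is the cross-bin behaviour that measures like the match distance and EMD are valued for.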