Prateek Bhakta

We study the mixing time of a Markov chain M_nn on permutations that performs nearest neighbor transpositions in the non-uniform setting, a problem arising in the context of self-organizing lists. We are given "positively biased" probabilities {p_{i,j} ≥ 1/2} for all i < j and let p_{j,i} = 1 − p_{i,j}. In each step, the chain M_nn chooses two adjacent elements k and …
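The step rule described above can be sketched in a few lines of Python. This is an illustrative toy, not the paper's analysis: the bias function `p` and the heat-bath style update below are assumptions made for the sketch.

```python
import random

def mnn_step(perm, p):
    """One step of a biased nearest neighbor transposition chain.

    perm: a list of distinct comparable elements.
    p(i, j): probability of placing i immediately before j, with
    p(i, j) >= 1/2 for i < j and p(j, i) = 1 - p(i, j).
    """
    k = random.randrange(len(perm) - 1)  # pick an adjacent pair uniformly
    a, b = perm[k], perm[k + 1]
    # Re-place the chosen pair: order (a, b) with probability p(a, b),
    # order (b, a) with the complementary probability p(b, a).
    if random.random() < p(a, b):
        perm[k], perm[k + 1] = a, b
    else:
        perm[k], perm[k + 1] = b, a
    return perm

# Example: a constant bias of 0.7 toward sorted order.
p = lambda i, j: 0.7 if i < j else 0.3
state = [3, 1, 2, 0]
for _ in range(10000):
    state = mnn_step(state, p)
```

With any bias strictly above 1/2, long runs of the chain concentrate on nearly sorted permutations.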
Graded posets arise frequently throughout combinatorics, where it is natural to try to count the number of elements of a fixed rank. These counting problems are often #P-complete, so we consider approximation algorithms for counting and uniform sampling. We show that for certain classes of posets, biased Markov chains that walk along edges of their Hasse diagrams …
Sampling permutations from S_n is a fundamental problem in probability theory. The nearest neighbor transposition chain M_nn is known to converge in time Θ(n log n) in the uniform case [18] and in time Θ(n) in the constant bias case, in which we put adjacent elements in order with probability p ≠ 1/2 and out of order with probability 1 − p [2]. Here we …
To my parents Tarulata and Jayesh, who have always been my strongest supporters. To my sister Smita, who was also there, I guess. ACKNOWLEDGEMENTS I first and foremost thank my advisor Dana Randall for supporting and encouraging me in my time at Georgia Tech. Without her patient mentorship, I would be greatly diminished as an academic. Her …
The Schelling Segregation Model was proposed by Thomas Schelling in 1971 as a means of explaining possible causes of racial segregation in cities. He considered residents of two types, say red and blue, where each person prefers the majority of his or her neighbors to have the same color. He showed through simulations that even mild preferences of this type …
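A minimal simulation in this spirit can be sketched as follows. This is a toy, not Schelling's exact setup: the grid representation, the happiness `threshold`, and the "move to a random vacant cell" rule are assumptions made for illustration.

```python
import random

def same_fraction(grid, x, y):
    """Fraction of an agent's occupied neighbors that share its color."""
    n = len(grid)
    me, same, occ = grid[x][y], 0, 0
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            i, j = x + dx, y + dy
            if (dx, dy) != (0, 0) and 0 <= i < n and 0 <= j < n \
                    and grid[i][j] is not None:
                occ += 1
                same += grid[i][j] == me
    return same / occ if occ else 1.0

def schelling_step(grid, threshold):
    """Move one random unhappy agent to a random vacant cell.

    An agent is unhappy when fewer than `threshold` of its occupied
    neighbors share its color. Returns the number of unhappy agents
    found before the move.
    """
    n = len(grid)
    cells = [(x, y) for x in range(n) for y in range(n)]
    unhappy = [c for c in cells if grid[c[0]][c[1]] is not None
               and same_fraction(grid, *c) < threshold]
    vacant = [c for c in cells if grid[c[0]][c[1]] is None]
    if unhappy and vacant:
        (x, y), (u, v) = random.choice(unhappy), random.choice(vacant)
        grid[u][v], grid[x][y] = grid[x][y], None
    return len(unhappy)
```

Iterating `schelling_step` until no agent is unhappy reproduces the qualitative effect Schelling observed: even a mild threshold tends to produce clustered, segregated configurations.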
Markov chains are fundamental tools used throughout the sciences and engineering; the design and analysis of Markov chains has been a focus of theoretical computer science for the last 20 years. A Markov chain takes a random walk in a large state space Ω, converging to a target stationary distribution π over Ω. The number of steps needed for the random walk …
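Convergence is usually measured in total variation distance, and the mixing time is the number of steps until the walk's distribution is within ε of π. A toy illustration on a 3-state chain (the lazy walk on a 3-cycle below is an assumed example, unrelated to the chains studied here):

```python
# Lazy random walk on a 3-cycle: hold with probability 1/2,
# otherwise move to a uniformly random neighbor.
P = [[0.5, 0.25, 0.25],
     [0.25, 0.5, 0.25],
     [0.25, 0.25, 0.5]]
pi = [1 / 3, 1 / 3, 1 / 3]   # uniform stationary distribution

def step_dist(mu, P):
    """One step of the chain on distributions: mu -> mu P."""
    n = len(mu)
    return [sum(mu[i] * P[i][j] for i in range(n)) for j in range(n)]

def tv(mu, nu):
    """Total variation distance: (1/2) * sum_i |mu_i - nu_i|."""
    return 0.5 * sum(abs(a - b) for a, b in zip(mu, nu))

def mixing_time(P, pi, start, eps=0.25):
    """Steps until the distribution from `start` is within eps of pi."""
    mu, t = start, 0
    while tv(mu, pi) >= eps:
        mu = step_dist(mu, P)
        t += 1
    return t
```

For large state spaces one cannot enumerate the distribution this way, which is why indirect techniques (coupling, canonical paths, spectral bounds) are needed to bound mixing times.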