Corpus ID: 244799698

Improving Teacher-Student Interactions in Online Educational Forums using a Markov Chain based Stackelberg Game Model

  • Rohith Dwarakanath Vallam, Priyanka Bhatt, Debmalya Mandal, Y. Narahari
With the rapid proliferation of the Internet, the area of education has undergone a massive transformation in terms of how students and instructors interact in a classroom. Online learning now takes more than one form, including the use of technology to enhance a face-to-face class, a hybrid class that combines both face-to-face meetings and online work, and fully online courses. Further, online classrooms are usually composed of an online education forum (OEF) where students and instructor… 

Blended Learning in ESL: Perceptions about paradigm shift in English Language Institutions of Punjab, Pakistan

  • S. Siddiq, R. Hussain
  • Education
    Journal of Humanities, Social and Management Sciences (JHSMS)
  • 2022
Tremendous technological developments have revolutionized the educational practices and experiences of English as a Second Language (ESL) teachers and learners at an unprecedented rate. One such…

Quality-control mechanism utilizing worker's confidence for crowdsourced tasks

An indirect mechanism is designed that enables a worker to declare her confidence by choosing a reward plan from a set of plans corresponding to different confidence intervals; choosing the plan that matches the worker's true confidence maximizes her expected utility.

Incentivizing participation in online forums for education

We present a game-theoretic model for online forums for education, where students in a class can post questions to the forum and seek responses from the instructor or other students in the class. We…

Playing games for security: an efficient exact algorithm for solving Bayesian Stackelberg games

This paper considers Bayesian Stackelberg games, in which the leader is uncertain about the types of adversary it may face, and presents an efficient exact algorithm for finding the optimal strategy for the leader to commit to in these games.
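The commitment idea underlying Stackelberg games can be illustrated without the Bayesian machinery of this paper. Below is a minimal sketch of a (non-Bayesian) Stackelberg equilibrium in pure strategies: the leader enumerates its commitments, the follower best-responds to each, and the leader keeps the commitment that maximizes its own payoff. The payoff matrices are illustrative values, not taken from the paper, and the follower is assumed to break ties in the leader's favor (the standard strong Stackelberg assumption).

```python
import numpy as np

# Payoff matrices for a 2-player game: rows = leader actions, cols = follower actions.
# Illustrative values only; not from the paper.
leader_payoff = np.array([[2, 4],
                          [1, 3]])
follower_payoff = np.array([[1, 0],
                            [0, 2]])

def stackelberg_pure(leader_payoff, follower_payoff):
    """Leader commits to a pure strategy; follower best-responds.

    Returns (leader_action, follower_action) maximizing the leader's payoff,
    assuming the follower breaks ties in the leader's favor.
    """
    best = None
    for i in range(leader_payoff.shape[0]):
        # Follower's set of best responses to commitment i.
        br = np.flatnonzero(follower_payoff[i] == follower_payoff[i].max())
        # Tie-break in the leader's favor.
        j = max(br, key=lambda j: leader_payoff[i, j])
        if best is None or leader_payoff[i, j] > leader_payoff[best[0], best[1]]:
            best = (i, j)
    return best

print(stackelberg_pure(leader_payoff, follower_payoff))  # → (1, 1)
```

Note that committing to action 1 earns the leader 3, more than the simultaneous-play outcome would give; the paper's contribution is the harder case where the leader faces uncertainty over follower types.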

Finite Markov Chains, volume 356

  • Van Nostrand, Princeton
  • 1960

Lumpability and time reversibility in the aggregation-disaggregation method for large Markov chains

It is shown that ordinary lumpability eliminates the aggregation procedure and a new algorithm is developed which produces the ergodic probability vector in one step for a class of Markov chains including the time reversible ones.

Lumpability and Commutativity of Markov Processes

We introduce the concepts of lumpability and commutativity of a continuous-time, discrete-state-space Markov process, and provide a necessary and sufficient condition for a lumpable Markov…

Exact and ordinary lumpability in finite Markov chains

  • P. Buchholz
  • Mathematics
    Journal of Applied Probability
  • 1994
Exact and ordinary lumpability in finite Markov chains is considered. Both concepts naturally define an aggregation of the Markov chain, yielding an aggregated chain that allows the exact…
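Ordinary lumpability has a simple operational test: a partition of the state space is lumpable if, within each block, every state has the same total transition probability into each block; the common row sums then form the aggregated transition matrix. Below is a minimal sketch of that check, with an illustrative 4-state matrix (assumed values, chosen so the partition is lumpable):

```python
import numpy as np

# Transition matrix of a 4-state chain; states {0,1} and {2,3} form candidate blocks.
# Illustrative values only, chosen so the partition is ordinarily lumpable.
P = np.array([
    [0.1, 0.3, 0.4, 0.2],
    [0.2, 0.2, 0.5, 0.1],
    [0.3, 0.3, 0.2, 0.2],
    [0.5, 0.1, 0.1, 0.3],
])
partition = [[0, 1], [2, 3]]

def lump(P, partition, tol=1e-12):
    """Check ordinary lumpability of `partition` for transition matrix `P`.

    Within each block, every state must have the same total transition
    probability into every block. If so, return the aggregated transition
    matrix; otherwise return None.
    """
    k = len(partition)
    Q = np.zeros((k, k))
    for b, B in enumerate(partition):
        for c, C in enumerate(partition):
            row_sums = P[np.ix_(B, C)].sum(axis=1)  # per-state mass into block C
            if np.ptp(row_sums) > tol:              # rows disagree -> not lumpable
                return None
            Q[b, c] = row_sums[0]
    return Q

Q = lump(P, partition)  # 2x2 aggregated chain, here [[0.4, 0.6], [0.6, 0.4]]
```

The aggregated matrix Q is itself stochastic, which is what lets analysis (e.g. computing the ergodic probability vector, as in the entry above) proceed on the much smaller chain.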

Crowdsourcing with endogenous entry

This work investigates the design of mechanisms to incentivize high quality outcomes in crowdsourcing environments with strategic agents, when entry is an endogenous, strategic choice, and shows that free entry can improve the quality of the best contribution over a winner-take-all contest with no taxes.

Implementing optimal outcomes in social computing: a game-theoretic approach

It is shown that optimal outcomes can never be implemented by contests if the system can rank the qualities of contributions perfectly, but if there is noise in the contributions' rankings, then the mechanism designer can again induce agents to follow strategies that maximize his utility.

Discrete Stochastic Processes

This chapter discusses Markov chains with countably infinite state spaces, random walks and martingales, and discrete-state Markov processes, all of which apply to renewal processes.