Semantic Scholar
RSS Bandit
Known as: Bandit (disambiguation)
RSS Bandit is an open source RSS/Atom aggregator based on the Microsoft .NET framework. It was originally released as a code sample in a series of…
Source: Wikipedia
Related topics (14 relations)
.NET Framework
Atom (standard)
Background Intelligent Transfer Service
Comparison of feed aggregators
Papers overview
Semantic Scholar uses AI to extract papers important to this topic.
Incorporating intent propensities in personalized next best action recommendation
Yuxi Zhang, Kexin Xie
ACM Conference on Recommender Systems, 2019. Corpus ID: 202639684
Next best action (NBA) is a technique that is widely considered as the best practice in modern personalized marketing. It takes…
Bidding Strategies in QoS-Aware Cloud Systems Based on N-Armed Bandit Problems
Marco Abundo, Valerio Di Valerio, V. Cardellini, F. L. Presti
Symposium on Network/Cloud Computing and…, 2014. Corpus ID: 13317926
In this paper we consider a set of Software as a Service (SaaS) providers that offer a set of Web services using the Cloud…
How Do Humans Handle the Dilemma of Exploration and Exploitation in Sequential Decision Making?
N. Namiki, Kuratomo Oyo, Tatsuji Takahashi
International Conference on Bio-inspired…, 2014. Corpus ID: 18246126
In an uncertain environment, decision-making meets two opposing demands. One is to explore new information, while the other is to…
How does the sensitivity of multileaving methods compare to that of interleaving methods?
Anne Schuth, F. Sietsma, Shimon Whiteson, Damien Lefortier, M. de Rijke
2014. Corpus ID: 43345221
Evaluation methods for information retrieval systems come in three types: offline evaluation, using static data sets annotated…
A dynamic programming strategy to balance exploration and exploitation in the bandit problem
O. Caelen, Gianluca Bontempi
Annals of Mathematics and Artificial Intelligence, 2010. Corpus ID: 21858176
The K-armed bandit problem is a well-known formalization of the exploration versus exploitation dilemma. In this learning problem…
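The exploration-versus-exploitation dilemma named in the abstract above can be sketched with an epsilon-greedy strategy, a common baseline for the K-armed bandit problem (this is an illustrative sketch, not the dynamic programming approach of the paper; the arm means are hypothetical):

```python
import random

def epsilon_greedy(true_means, epsilon=0.1, steps=1000, seed=0):
    """Simulate an epsilon-greedy agent on a K-armed Bernoulli bandit.

    With probability epsilon the agent explores (pulls a random arm);
    otherwise it exploits the arm with the highest estimated mean reward.
    """
    rng = random.Random(seed)
    k = len(true_means)
    counts = [0] * k          # number of pulls per arm
    estimates = [0.0] * k     # running mean reward per arm
    total_reward = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(k)                            # explore
        else:
            arm = max(range(k), key=lambda a: estimates[a])   # exploit
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        counts[arm] += 1
        # incremental update of the running mean for the pulled arm
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total_reward += reward
    return estimates, total_reward

estimates, total = epsilon_greedy([0.2, 0.5, 0.8])
```

With a small epsilon the agent spends most pulls on whichever arm currently looks best, while the occasional random pull keeps the estimates of the other arms from going stale — the trade-off the K-armed bandit formalizes.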
On the evolution of the expected gain of a greedy action in the bandit problem
O. Caelen, Gianluca Bontempi
2008. Corpus ID: 55386972
The K-armed bandit problem is a well-known formalization of the exploration versus exploitation dilemma. In a K-armed bandit…
The Social Bandit after Apartheid
Leola Johnson
2000. Corpus ID: 114897250