Here we describe some of our running projects. Each of these projects focuses on the problem of matching – in a broad sense of the word – treatments to individuals. That’s personalization. However, as opposed to many attempts in the social and medical sciences, we take a computational approach to the problem of personalizing treatments (or web content, or prices, or medical interventions, or…).

Very briefly, we view the “problem” of personalization as follows: there is some function $\vec{y} = f(\vec{x},\vec{a})$ that describes the relationship between the properties of a person (or, more broadly, the current state of the world), as encoded in the feature vector $\vec{x}$, and possible treatments $\vec{a}$ to some set of observable outcome(s) $\vec{y}$. Next, there is some function $r = g(\vec{y})$ mapping the observable outcome(s) to a reward, and our objective is to find $\underset{\vec{a}}{\mathrm{argmax}}\, g(f(\vec{x}, \vec{a}))$; i.e., to choose the treatment such that the reward is maximized.
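To make the objective concrete, here is a minimal sketch in Python. The outcome model `f`, reward function `g`, and the binary treatment set are all hypothetical toy choices; in practice $f$ is unknown and must be estimated from data.

```python
# Toy illustration of choosing a = argmax_a g(f(x, a)).
# f, g, and the action set below are made up for this example.

def f(x, a):
    """Hypothetical outcome model: treatment 1 helps only when x is positive."""
    return x * (1.0 if a == 1 else -1.0)

def g(y):
    """Hypothetical reward function: the reward is the outcome itself."""
    return y

def best_treatment(x, actions=(0, 1)):
    """Pick the action that maximizes g(f(x, a))."""
    return max(actions, key=lambda a: g(f(x, a)))

print(best_treatment(0.5))   # treatment 1 is best for positive x
print(best_treatment(-0.5))  # treatment 0 is best for negative x
```

Note that the best treatment depends on $\vec{x}$: the whole point of personalization is that the argmax differs between individuals.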

The above seems like a simple optimization problem, but in practice choosing the treatments such that the rewards are maximized for each individual (or for a society) can be quite difficult. The process is primarily complicated by the fact that $f()$ is generally not known and needs to be estimated. Currently, $f()$ is often estimated using randomized clinical trials (RCTs) in which two possible choices of $\vec{a}$ are compared, and the “best” one is selected. This method a) ignores the uncertainty in the resulting choice, b) often neglects the size of the total “to-be-treated” population, and c) often does not carry information from one trial to the next. As a result, the RCT is inefficient, ill-suited for true one-to-one personalization, and cumbersome in cases in which treatments change quickly. Our lab develops alternative (statistical) methods of estimating treatment effects at the individual level. We study how machine learning methods, combined with active exploration, can be more efficient for evaluating treatment effects than the standard RCT. We apply our methodology in several domains.

Finally, note that for notational ease we omitted a bunch of subscripts in the presentation above, but obviously we might interact with the same person multiple times, with persons from the same group, or people connected in some network; all of which impose dependencies in the observed data that should be accounted for.

Many other difficulties can be encountered, and these differ across fields of application. Most of the projects below focus on such specific issues. However, they share a common goal: developing effective and efficient methods for treatment personalization.

Bootstrap Thompson Sampling

Contextual bandit problems emerge everywhere on the Web: selecting Web content given the features of a user, with the explicit aim of optimizing some criterion, generally gives rise to a problem that can be framed as sequentially ($t = 1, \dots, T$) choosing actions $a \in A$ given context $x \in X$ with the aim of maximizing the cumulative reward $\sum_{t=1}^{T} r_t$. Thompson sampling provides an effective strategy for solving these problems. However, Thompson sampling can be computationally infeasible. In this project we work on (online) bootstrap approximations for use in Thompson sampling.
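The idea can be sketched for the simplest (context-free) case: each arm keeps a number of online bootstrap replicates of its reward estimate, one replicate is sampled to choose an arm, and each replicate is resampled with a "double-or-nothing" weight on update. This is an illustrative toy, not the project's actual implementation; all class and parameter names are our own.

```python
import random

# Sketch of Bootstrap Thompson Sampling for a K-armed Bernoulli bandit
# (context omitted for brevity). Each arm keeps J online bootstrap
# replicates of its success rate; on each update a replicate receives
# the observation with weight 0 or 2 (expected weight 1).

class BootstrapTS:
    def __init__(self, n_arms, n_replicates=100):
        # pseudo-counts per replicate (a uniform-like starting point)
        self.alpha = [[1.0] * n_replicates for _ in range(n_arms)]
        self.beta = [[1.0] * n_replicates for _ in range(n_arms)]
        self.J = n_replicates

    def select(self):
        j = random.randrange(self.J)  # sample one bootstrap replicate
        est = [a[j] / (a[j] + b[j]) for a, b in zip(self.alpha, self.beta)]
        return max(range(len(est)), key=est.__getitem__)

    def update(self, arm, reward):
        for j in range(self.J):
            if random.random() < 0.5:  # double-or-nothing resampling
                self.alpha[arm][j] += 2 * reward
                self.beta[arm][j] += 2 * (1 - reward)

random.seed(1)
bandit = BootstrapTS(n_arms=2)
true_p = [0.3, 0.7]  # unknown to the algorithm
for t in range(2000):
    arm = bandit.select()
    reward = 1 if random.random() < true_p[arm] else 0
    bandit.update(arm, reward)
print(bandit.select())  # most often selects the better arm (1)
```

Sampling a replicate plays the role of sampling from the posterior in standard Thompson sampling, while the updates require only constant memory per arm.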


Dr. Dean Eckles
Dr. Maurits Kaptein



Lock-in Feedback for sequential experiments

We often encounter situations in which an experimenter wants to find, by sequential experimentation, $x_{\max} = \underset{x}{\mathrm{argmax}}\, f(x)$, where $f(x)$ is a (possibly unknown) function of a well-controllable variable $x$. Taking inspiration from physics and engineering, we have designed a new method to address this problem. The project is an interdisciplinary collaboration in which methods used in physics are applied to optimization problems in the social sciences.
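The physics inspiration is the lock-in amplifier: oscillate $x$ around a current guess $x_0$, demodulate the noisy observations at the oscillation frequency to estimate the local slope, and move $x_0$ uphill. The sketch below is our own toy rendering of that idea under arbitrary constants, not the project's actual algorithm or tuning.

```python
import math
import random

# Lock-in-feedback-style maximization of a noisy function:
# observe y = f(x0 + A*cos(omega*t)) + noise, multiply by cos(omega*t),
# and integrate over full periods to estimate the gradient at x0.

def lock_in_maximize(f, x0, A=0.1, omega=1.0, gamma=0.4,
                     n_cycles=200, steps_per_cycle=50):
    dt = 2 * math.pi / (omega * steps_per_cycle)
    t = 0.0
    for _ in range(n_cycles):
        integral = 0.0
        for _ in range(steps_per_cycle):
            x = x0 + A * math.cos(omega * t)  # oscillate around x0
            y = f(x) + random.gauss(0, 0.01)  # noisy observation
            integral += y * math.cos(omega * t) * dt  # demodulate
            t += dt
        # over one full period the integral is roughly
        # (A * pi / omega) * f'(x0), so step uphill along it
        x0 += gamma * integral
    return x0

random.seed(0)
x_max = lock_in_maximize(lambda x: -(x - 2.0) ** 2, x0=0.0)
print(round(x_max, 1))  # approaches the maximum at x = 2
```

Because the demodulation averages out both the noise and the constant term $f(x_0)$, the procedure climbs toward the maximum without ever forming an explicit model of $f$.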


Prof. Dr. Davide Iannuzzi
Dr. Maurits Kaptein
Robin van Emden




STREAMY

Solving (bandit) decision problems in data streams in production environments is challenging. With the Python module STREAMY we are trying to develop a framework for streaming / online solutions to bandit problems which can be used to experiment with novel solutions in production environments.


Maurits Kaptein
Jules Kruijswijk
Vincent Gijsen
Robin van Emden

Online learning of multi-level models

Hierarchical or mixed-effects (or multi-level) models provide a useful framework for estimation in the context of grouped data. However, fitting (G)LMMs in data streams can be computationally demanding. In this project we develop online (approximate) solutions for fitting these models in data streams. One of the first results of this project is SEMA, an online approximation to the EM algorithm.
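The core streaming pattern can be illustrated with something far simpler than SEMA: per-group running means plus a grand mean, updated one observation at a time and never revisiting old data, with sparse groups shrunk toward the grand mean. The class, the fixed shrinkage weight, and the data below are illustrative inventions; a real multilevel model estimates the shrinkage from the variance components.

```python
from collections import defaultdict

# Streaming estimation for grouped data: update sufficient statistics
# per observation (online mean increments), then predict per group with
# shrinkage toward the grand mean.

class StreamingGroupMeans:
    def __init__(self):
        self.n = defaultdict(int)
        self.mean = defaultdict(float)
        self.n_total = 0
        self.grand_mean = 0.0

    def update(self, group, y):
        # incremental mean updates: O(1) per data point
        self.n[group] += 1
        self.mean[group] += (y - self.mean[group]) / self.n[group]
        self.n_total += 1
        self.grand_mean += (y - self.grand_mean) / self.n_total

    def predict(self, group, shrinkage=5.0):
        # fixed shrinkage weight for illustration only
        n = self.n[group]
        w = n / (n + shrinkage)
        return w * self.mean[group] + (1 - w) * self.grand_mean

model = StreamingGroupMeans()
for group, y in [("a", 1.0), ("a", 3.0), ("b", 10.0)]:
    model.update(group, y)
print(model.predict("a"))  # shrunken estimate for group "a"
```

SEMA applies the same "update statistics, discard the data point" discipline to the E- and M-steps of a full (G)LMM fit.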


Lianne Ippel
Dr. Maurits Kaptein
Prof. Dr. Jeroen Vermunt


Personalized Persuasion in Ambient Intelligence

There is a growing interest in persuasive technologies: Interactive applications specifically designed to change the attitudes and behaviors of their users. In this project we examined the opportunity to personalize persuasive messages.


Prof. Dr. Emile Aarts
Prof. Dr. Panos Markopoulos
Prof. Dr. Boris de Ruyter
Dr. Maurits Kaptein


See also

More coming soon

Here are a few upcoming projects we will describe in more detail soon:

  • Bayesian Adaptive Clinical Trials (PhD project, Xynthia Kavelaars, w. Joris Mulder)
  • eHealth personalization (PhD project, Ylva Hendrink, w. Sebastiaan Peek & Inge Bongers)
  • Data Science methods for Personalization in healthcare (PhD project Bas Willemse, w. Fleur Hasaart)
  • Predicting Churn (PhD project TBD, w. Aurelie Lemmens)

And even more…

More projects are running, often in collaboration with industry (e.g., sciencerockstars b.v., webpower b.v.). Other interesting collaborations include the development of a novel statistics book for HCI researchers together with Prof. Dr. Judy Robertson, and the development of adaptive pricing strategies together with Prof. Dr. Petri Parvinen.