# Dan Spielman: Discrepancy Theory and Randomized Controlled Trials (2nd lecture)

**Time:**
Tue 2024-09-17, 13:15 - 14:15

**Location:**
D2

**Speaker:**
Dan Spielman (Yale University)

Discrepancy theory tells us that it is possible to divide a set of vectors into two sets that look surprisingly similar to each other. In particular, these sets can be much more similar to each other than those produced by a random division. The development of discrepancy theory has been motivated by applications in fields including combinatorics, geometry, optimization, and functional analysis. But we expect its greatest impact will be in the design of randomized controlled trials.
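A small illustrative sketch (not from the talk itself): one common formulation assigns each vector a sign of +1 or -1, splitting the set into two groups, and measures the "discrepancy" as the norm of the signed sum. The greedy heuristic below is a simple stand-in for the sophisticated algorithms the talk covers; it merely shows that even a naive informed division can be far more balanced than a coin-flip division.

```python
# Sketch: compare a random division of vectors with a greedy one.
# All parameters (n, d, the vector distribution) are arbitrary choices.
import random, math

random.seed(0)
n, d = 200, 5
vecs = [[random.uniform(-1, 1) for _ in range(d)] for _ in range(n)]

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def signed_sum(vectors, signs):
    s = [0.0] * d
    for v, sg in zip(vectors, signs):
        for i in range(d):
            s[i] += sg * v[i]
    return s

# Random division: a fair coin flip for each vector.
rand_signs = [random.choice([-1, 1]) for _ in range(n)]
rand_disc = norm(signed_sum(vecs, rand_signs))

# Greedy division: give each vector the sign that keeps the running sum small.
greedy_signs = []
running = [0.0] * d
for v in vecs:
    plus = norm([running[i] + v[i] for i in range(d)])
    minus = norm([running[i] - v[i] for i in range(d)])
    sg = 1 if plus <= minus else -1
    greedy_signs.append(sg)
    running = [running[i] + sg * v[i] for i in range(d)]
greedy_disc = norm(running)

print(rand_disc, greedy_disc)  # greedy split is typically far more balanced
```

A random signed sum of n such vectors typically has norm on the order of the square root of n, while the greedy running sum hovers near the norm of a single vector, which is the sense in which an informed division "looks surprisingly similar".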

Randomized controlled trials are used to test the effectiveness of interventions, like medical treatments and educational innovations. Randomization makes it likely that the test and control groups are similar. If we know nothing about the experimental subjects, a random division of subjects into test and control groups is the best choice. But when we do have prior information about the experimental subjects, we can produce random divisions of low discrepancy that lead to more accurate estimates of the effectiveness of treatments.
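A hypothetical simulation of this point (the setup, parameters, and matched-pair scheme below are my own simplification, not the method from the talk): when a known covariate strongly predicts the outcome, a split that balances that covariate, while remaining random, estimates the treatment effect with much smaller error than a uniformly random split.

```python
# Sketch: uniformly random split vs. covariate-balanced (matched-pair) split.
import random

random.seed(1)
n, true_effect, reps = 100, 2.0, 200

def estimate(x, treated):
    # Outcome = strong covariate effect + treatment effect + small noise.
    y = [10 * xi + (true_effect if t else 0.0) + random.gauss(0, 0.1)
         for xi, t in zip(x, treated)]
    t_mean = sum(yi for yi, t in zip(y, treated) if t) / treated.count(True)
    c_mean = sum(yi for yi, t in zip(y, treated) if not t) / treated.count(False)
    return t_mean - c_mean

rand_err = bal_err = 0.0
for _ in range(reps):
    x = [random.uniform(0, 1) for _ in range(n)]

    # Uniformly random split into two equal groups.
    idx = list(range(n))
    random.shuffle(idx)
    treated = [False] * n
    for i in idx[: n // 2]:
        treated[i] = True
    rand_err += abs(estimate(x, treated) - true_effect)

    # Balanced split: pair subjects with similar covariates,
    # then randomize treatment within each pair.
    order = sorted(range(n), key=lambda i: x[i])
    treated2 = [False] * n
    for a, b in zip(order[::2], order[1::2]):
        if random.random() < 0.5:
            treated2[a] = True
        else:
            treated2[b] = True
    bal_err += abs(estimate(x, treated2) - true_effect)

print(rand_err / reps, bal_err / reps)  # balanced split: smaller average error
```

The balanced split is still random (each subject in a pair is treated with probability one half), so it retains the protection randomization offers, while the pairing removes most of the covariate imbalance that inflates the error of the plain random split.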

Until a breakthrough of Bansal in 2010, the major algorithmic problems of discrepancy theory were thought to be computationally intractable. We now have efficient algorithms that solve many discrepancy problems, and the development of these algorithms has led to many new discoveries in discrepancy theory.

In these talks, we will survey the major mathematical and algorithmic results of discrepancy theory, and explain how discrepancy theory can be used to improve randomized controlled trials.