Impact and Feedback Loops

January 2025

In high school, I wanted to be a physicist. I loved solving problems and would stay up late watching documentaries on particle physics and cosmology. At university, I gravitated towards theoretical physics, but it became increasingly clear that the field was moving too slowly. Case in point: the Higgs boson, proposed in 1964, was only experimentally confirmed in 2012, a whopping 48 years later! Wondering where else physics could be useful, I got excited by its applications in biology and medicine, where it could help save lives. Humanity also appeared to be at the beginning of a biotech revolution. So I pivoted to biophysics, studied biotech, and set my sights on machine learning for drug discovery (MLDD).

Fast forward to 2025: I had worked at a large pharma company and a biotech startup. I was exactly where I wanted to be, working on cutting-edge MLDD (mRNA vaccines, diffusion models, large language models, etc.) to defeat some of the deadliest diseases. Alas, my fulfillment didn't last. I trained ML models that performed well on benchmarks, but there was never an unequivocal way to test whether the models were useful. The absence of tangible impact disheartened me despite the importance of the mission. My m.o. of prioritizing cool work over impact had eroded. I was impatient for impact.

Feedback Loops

The word “impatient” is a bit imprecise, so I’ll use the concept of a feedback loop, which I define broadly as a process where the output of a system determines the input of future operations. The best feedback loops are fast (impact is seen quickly) and actionable (it is clear how that impact should change future behavior) because they enable faster iteration and convergence towards a goal [1].

Feedback loops in MLDD are often neither [2]. They usually consist of (1) an ML model that predicts a quantity of interest (e.g. a drug’s brain penetrance), and (2) an experiment that evaluates the quality of that prediction. These experiments are crucial because high scores on benchmarks often don’t translate into actual usefulness. Unfortunately, they are usually too difficult, too expensive, or not allowed for safety reasons. As a result, the ML models cannot be properly evaluated, and thus cannot be improved: the feedback loops aren’t actionable. In contrast, you don’t need an experiment to tell you whether ChatGPT is useful: you simply read its output and judge for yourself.

Looking back, my pivots make sense: theoretical physics is bottlenecked by infeasible experiments, leading to slow feedback loops; the feedback loops in MLDD aren’t actionable and are also slow (and we haven’t even mentioned the bureaucracy of clinical trials). I knew from my studies that drug discovery is risky and long-term, but I hadn’t experienced the day-to-day frustration that this could lead to.
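To make the definition concrete, here is a toy sketch of a feedback loop in Python. The names (`feedback_loop`, `step`, `adjust`) are hypothetical, invented purely for illustration; the point is only that each output determines the next input, and that a fast, actionable loop converges quickly on its goal:

```python
def feedback_loop(state, step, adjust, iterations):
    """Run a generic feedback loop: each output becomes
    the information that shapes the next input."""
    for _ in range(iterations):
        output = step(state)            # the system produces an output
        state = adjust(state, output)   # the output determines the next input
    return state

# Toy example: repeatedly halve the distance to a target value.
# The feedback is fast (measured every iteration) and actionable
# (it tells us exactly how to adjust), so we converge quickly.
target = 100.0
result = feedback_loop(
    state=0.0,
    step=lambda s: target - s,          # feedback: how far we are from the goal
    adjust=lambda s, err: s + err / 2,  # act on it: move halfway towards the goal
    iterations=20,
)
```

When the feedback is slow or not actionable, as in the MLDD case above, `step` takes months and `adjust` has no obvious definition, which is precisely the problem.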

Restlessly Eager

The lesson is clear: my personality doesn’t align with the slow, non-actionable feedback loops of MLDD, which likely means I have to leave the field. I am thankful for all the smart people working hard to move the needle in MLDD. We need them: their success could be the greatest unambiguous improvement to human life. As for me, I think I am a good kind of impatient, more akin to restlessly eager. I want to work on problems with fast and actionable feedback loops, problems where impact is measured in days and weeks rather than months and years.

[1] The faster the feedback loop, the more room there is to explore because impact can be tested more quickly.

[2] Slow and/or non-actionable feedback loops are sometimes inevitable for scientific discoveries, technological breakthroughs, and art.