Innogen Institute, University of Edinburgh; CASMI, University of Oxford; JW Scannell Analytics LTD.
Why has drug discovery become less efficient even as the technologies that most people think are important have improved spectacularly? DNA sequencing is perhaps ten billion times faster now than it was in the 1970s. A single chemist today may make 1,000 times more molecules to test than they could in the early 1980s. X-ray crystallography is perhaps ten thousand times quicker now than it was in the mid-1960s. Almost every research technique is faster or cheaper, or both, than ever. And there are entirely new techniques that were unavailable to previous scientific generations, such as transgenic mice and computer-based virtual models. Yet for every inflation-adjusted dollar spent on drug research in 1950, in 2010 one needed to spend more than $100 to achieve similar success.
Our work, applying tools from statistical decision theory to the drug R&D process, suggests an answer. The validity of screening and disease models (i.e., the degree to which their results predict subsequent results in man) must have declined as their brute-force efficiency has increased. Compare drug discovery to fitting out a speedboat to find a small island in a big ocean. If one invests too much in the engine (brute-force efficiency) and too little in the compass (validity), efficiency declines because the boat spends its time heading at great speed in the wrong direction. We suspect that the validity of screening and disease models has fallen for two reasons. First, the best models yield cures, become redundant, and so are retired. This leaves the uncured diseases, with their bad models, which people continue to use for want of anything better. Second, there may have been too much industrial and academic fashion for simplistic molecular models with low validity. We think that the rate of creation and identification of valid models may be the major constraint on R&D productivity.
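The engine-versus-compass trade-off can be sketched as a toy selection simulation (this is an illustration under assumed parameters, not the authors' actual decision-theoretic model). Candidates have a true quality; a screening model's score correlates with true quality at a chosen validity; we pick the top scorer. The validity values (0.3 vs. 0.9) and candidate counts (100,000 vs. 100) are arbitrary assumptions chosen to contrast a fast boat with a bad compass against a slow boat with a good one:

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_pick_quality(n_candidates, validity, n_trials=100):
    """Screen n_candidates per trial with a model whose score correlates
    with true quality at `validity`; pick the top scorer each trial and
    return the mean true quality of the picks."""
    picks = []
    for _ in range(n_trials):
        true_q = rng.standard_normal(n_candidates)
        noise = rng.standard_normal(n_candidates)
        # Score = validity * signal + orthogonal noise, so that
        # corr(score, true_q) == validity.
        score = validity * true_q + np.sqrt(1.0 - validity**2) * noise
        picks.append(true_q[np.argmax(score)])
    return float(np.mean(picks))

# Big engine, bad compass: 100,000 candidates, validity 0.3.
fast_boat = expected_pick_quality(100_000, validity=0.3)

# Small engine, good compass: 1,000x fewer candidates, validity 0.9.
good_compass = expected_pick_quality(100, validity=0.9)
```

Because the expected true quality of the best of N candidates grows only like the square root of the logarithm of N, while it scales linearly in validity, the high-validity screen with 1,000-fold less throughput reliably selects better candidates than the low-validity, brute-force screen.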