By James Surowiecki
The deadly hemorrhagic fever Ebola was first discovered in 1976, and it has haunted the public imagination for twenty years, ever since the publication of Richard Preston’s “The Hot Zone.” Yet, in all that time, no drug has ever been approved to treat the disease. Now the deadliest outbreak yet is raging in West Africa, and there are no real tools to stop it. (Supplies of the experimental drug administered to two American patients have already run out.) The lack of an Ebola treatment is disturbing. But, given the way drug development is funded, it’s also predictable.
When pharmaceutical companies are deciding where to direct their R. & D. money, they naturally assess the potential market for a drug candidate. That means that they have an incentive to target diseases that affect wealthier people (above all, people in the developed world), who can afford to pay a lot. They have an incentive to make drugs that many people will take. And they have an incentive to make drugs that people will take regularly for a long time–drugs like statins.
This system does a reasonable job of getting Westerners the drugs they want (albeit often at high prices). But it also leads to enormous underinvestment in certain kinds of diseases and certain categories of drugs. Diseases that mostly affect poor people in poor countries aren’t a research priority, because it’s unlikely that those markets will ever provide a decent return. So diseases like malaria and tuberculosis, which together kill two million people a year, have received less attention from pharmaceutical companies than high cholesterol. Then, there’s what the World Health Organization calls “neglected tropical diseases,” such as Chagas disease and dengue; they affect more than a billion people and kill as many as half a million a year. One study found that, of the more than fifteen hundred drugs that came to market between 1975 and 2004, just ten were targeted at these maladies. And when a disease’s victims are both poor and not very numerous, that’s a double whammy. On both scores, a drug for Ebola looks like a bad investment: so far, the disease has appeared only in poor countries and has affected a relatively small number of people.
It’s not just developing nations that the system disserves, however. In recent years, the rise of drug-resistant microbes has made the antibiotics we use less effective and has increased the risk that an infectious disease could get out of control. What people in the West need, health officials agree, are new drugs that we can keep in reserve against an outbreak that regular antibiotics can’t contain. Yet, over the past thirty years, the supply of new antibiotics has slowed to a trickle. “Antibiotic resistance really has the potential to make everything about the way we live different,” Kevin Outterson, a co-director of the Health Law program at Boston University and a founding member of the C.D.C.’s working group on antimicrobial resistance, told me. “So we need to stoke the pipeline.”
The trouble, again, is the business model. If a drug company did invent a powerful new antibiotic, we wouldn’t want it to be widely prescribed, because the goal would be to delay resistance. “Public-health officials would appropriately try to limit sales of the drug as much as possible,” Outterson says. That’s good public-health policy, but a bad investment prospect.
So how can we get the drugs we need without magically transforming the industry that develops them? The key is to reward companies for creating substantial public-health benefits. And the simplest way to do this would be to offer prizes for new drugs. Outterson describes one scenario: “The government would make a payment or a stream of payments to the company, and in exchange the company would give up the right to sell the product.” The drug company would get paid, and would avoid all the expenses of trying to push a new product (which you don’t want with a last-resort antibiotic, anyway). Society would get a new drug, and public-health officials would be able to control how it was promoted and used.
Prizes aren’t a new idea–in the seventeen-hundreds, the British government successfully used a prize to find a method for determining longitude at sea. But, in the past couple of decades, they’ve become more common, with prizes being offered for things like innovations in private space flight and an arsenic filter for safe drinking water. The Obama Administration has been especially active in this area, offering more than a hundred and fifty prizes for a range of technological breakthroughs. Economists on both the left and the right see them as a useful way to spark innovation. They’re cost-effective, since you have to pay only if the product works. They’re well suited to encouraging investment in public goods–like antibiotics and vaccines–where the benefits of an innovation aren’t reaped only by those who use it. (My family is safer if yours is vaccinated.) They rely on existing infrastructure. And, in economic jargon, they harness market forces by “pulling” research into neglected areas.
The up-front costs of a prize system would be substantial–a recent report commissioned by the F.D.A. estimated that it would cost a billion dollars to get a great new antibiotic, factoring in tax credits. But we’d save lives by developing the drugs we need and taking measures against future disaster. The alternative is pretty grim: a system that, when it comes to some fierce mortal perils, is leaving a lot of blood on the floor.
James Surowiecki has been a staff writer at The New Yorker since 2000 and writes The Financial Page.