About this Author
Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases.
To contact Derek email him directly: email@example.com
In the Pipeline:
Don't miss Derek Lowe's excellent commentary on drug discovery and the pharma industry in general at In the Pipeline
August 22, 2014
Science has an article by journalist Ken Garber on palbociclib, the Pfizer CDK4 compound that came up here the other day when we were discussing their oncology portfolio. You can read up on the details of how the compound was put in the fridge for several years, only to finally emerge as one of the company's better prospects. The roots of the project go back to about 1995 at Parke-Davis:
Because the many CDK family members are almost identical, “creating a truly selective CDK4 inhibitor was very difficult,” says former Parke-Davis biochemist Dave Fry, who co-chaired the project with chemist Peter Toogood. “A lot of pharmaceutical companies failed at it, and just accepted broad-spectrum CDK inhibitors as their lead compounds.” But after 6 years of work, the pair finally succeeded with the help of some clever screens that could quickly weed out nonspecific “dirty” compounds.
Their synthesis in 2001 of palbociclib, known internally as PD-0332991, was timely. By then, many dirty CDK inhibitors from other companies were already in clinical trials, but they worked poorly, if at all. Because they hit multiple CDK targets, these compounds caused too much collateral damage to normal cells. . .Eventually, most efforts to fight cancer by targeting the cell cycle ground to a halt. “Everything sort of got hung up, and I think people lost enthusiasm,” Slamon says.
PD-0332991 fell off the radar screen. Pfizer, which had acquired Warner-Lambert/Parke-Davis in 2000 mainly for the cholesterol drug Lipitor, did not consider the compound especially promising, Fry says, and moved it forward haltingly at best. “We had one of the most novel compounds ever produced,” Fry says, with a mixture of pride and frustration. “The only compound in its class.”
A major merger helped bury the PD-0332991 program. In 2003, Pfizer acquired Swedish-American drug giant Pharmacia, which flooded Pfizer's pipeline with multiple cancer drugs, all competing for limited clinical development resources. Organizational disarray followed, says cancer biologist Dick Leopold, who led cancer drug discovery at the Ann Arbor labs from 1989 to 2003. “Certainly there were some politics going on,” he says. “Also just some logistics with new management and reprioritization again and again.” In 2003, Pfizer shut down cancer research in Ann Arbor, which left PD-0332991 without scientists and managers who could demand it be given a chance, Toogood says. “All compounds in this business need an advocate.”
So there's no doubt that all the mergers and re-orgs at Pfizer slowed this compound down, and no doubt a long list of others, too. The problems didn't end there. The story goes on to show how the compound went into Phase I in 2004, but only got into Phase II in 2009. The problem is, well before that time it was clear that there were tumor types that should be more sensitive to CDK4 inhibition. See this paper from 2006, for example (and there were some before this as well).
It appears that Pfizer wasn't going to develop the compound at all (thus that long delay after Phase I). They made it available as a research tool to Selina Chen-Kiang at Weill Cornell, who saw promising results with mantle cell lymphoma, then Dennis Slamon and Richard Finn at UCLA profiled the compound in breast cancer lines and took it into a small trial there, with even more impressive results. And at this point, Pfizer woke up.
Before indulging in a round of Pfizer-bashing, though, it's worth remembering that stories broadly similar to this are all too common. If you think that the course of true love never did run smooth, you should see the course of drug development. Warner-Lambert (for example) famously tried to kill Lipitor more than once during its path to the market, and it's a rare blockbuster indeed that hasn't passed through at least one near-death-experience along the way. It stands to reason: since the great majority of all drug projects die, the few that make it through are the ones that nearly died.
There are also uncounted stories of drugs that nearly lived. Everyone who's been around the industry for a while has, or has heard, tales of Project X for Target Y, which was going along fine and looked like a winner until Company Z dropped it for Stupid Reason. . .uh, Aleph. (Ran out of letters there). And if only they'd realized this, that, and the other thing, that compound would have made it to market, but no, they didn't know what they had and walked away from it, etc. Some of these stories are probably correct: you know that there have to have been good projects dropped for the wrong reasons and never picked up again. But they can't all be right. Given the usual developmental success rates, most of these things would have eventually wiped out for some reason. There's an old saying among writers that the definition of a novel is a substantial length of narrative fiction that has something wrong with it. In the same way, every drug that's on the market has something wrong with it (usually several things), and all it takes is a bit more going wrong to keep it from succeeding at all.
So where I fault Pfizer in all this is in the way that this compound got lost in all the re-org shuffle. If it had developed more normally, its activity would have been discovered years earlier. Now, it's not like there are dozens of drugs that haven't made it to market because Pfizer dropped the ball on them - but given the statistics, I'll bet that there are several (two or three? five?) that could have made it through by now, if everyone hadn't been so preoccupied with merging, buying, moving, rearranging, and figuring out if they were getting laid off or not.
The good thing is that other companies stepped into the field on the basis of those earlier publications, and found CDK4/6 inhibitors of their own (notably Novartis and Lilly). This is why I think that huge mergers hurt the intellectual health of the drug industry. Take it to the reductio ad not-all-that-absurdum of One Big Drug Company. If we had that, and only that, then whole projects and areas of research would inevitably get shelved, and there would be no one left to pick them up at all. (I'll also note, in passing, that, should all of the CDK inhibitors make it to market, there will be yahoos who decry the whole thing as nothing but a bunch of fast-follower me-too drugs, waste of time and money, profits before people, and so on. Watch for it.)
Category: Cancer | Drug Development | Drug Industry History
August 21, 2014
So here's a question for the medicinal chemists: how come we don't like bromoaromatics so much? I know I don't, but I have trouble putting my finger on just why. I know that there's a ligand efficiency argument to be made against them - all that weight, for one atom - but there are times when a bromine seems to be just the thing. There certainly are such structures in marketed drugs. Some of the bad feelings around them might linger from the sense that it's sort of an unnatural element, as opposed to chlorine, which in the form of chloride is everywhere in living systems.
But bromide? Well, for what it's worth, there's a report that bromine may in fact be an essential element after all. That's not enough to win any arguments about putting it into your molecules - selenium's essential, too, and you don't see people cranking out the organoselenides. But here's a thought experiment: suppose you have two drug candidate structures, one with a chlorine on an aryl ring and the other with a bromine on the same position. If they have basically identical PK, selectivity, preliminary tox, and so on, which one do you choose to go on with? And why?
If you chose the chloro derivative (and I think that most medicinal chemists instinctively would, for just the same hard-to-articulate reasons we're talking about), then what split in favor of the bromo compound would be enough to make you favor it? How much more activity, PK coverage, etc. do you need to make you willing to take a chance on it instead?
Category: Drug Development | Odd Elements in Drugs | Pharmacokinetics | Toxicology
Edward Zartler ("Teddy Z" of the Practical Fragments blog) has a short piece in the latest ACS Medicinal Chemistry Letters on fragment-based drug discovery. He applies the term "fragonomics" to the field (more on this in a moment), and provides a really useful overview of how it should work.
One of his big points is that fragment work isn't so much about using smaller-than-usual molecules, as it is using molecules that make only good interactions with the target. It's just that smaller molecules are far more likely to achieve that - a larger one will have some really strong interactions, along with some things that actually hurt the binding. You can start with something large and hack pieces of it off, but that's often a difficult process (and you can't always recapitulate the binding mode, either). But if you have a smaller piece that only makes a positive interaction or two, then you can build out from that, tiptoeing around the various landmines as you go. That's the concept of "ligand efficiency", without using a single equation.
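For readers who do want the equation: ligand efficiency is usually taken as the binding free energy divided by the number of heavy (non-hydrogen) atoms, commonly approximated as 1.37 × pIC50 per heavy atom in kcal/mol near room temperature. Here's a minimal sketch of that arithmetic - the potencies and atom counts below are invented for illustration, not data from the article:

```python
def ligand_efficiency(pic50: float, heavy_atoms: int) -> float:
    """Approximate ligand efficiency (kcal/mol per heavy atom).

    Uses LE ~= -dG / N_heavy, with dG ~= -RT*ln(10)*pIC50,
    i.e. about -1.37 * pIC50 kcal/mol at ~300 K.
    """
    RT_LN10 = 1.37  # kcal/mol per log unit, at ~300 K
    return RT_LN10 * pic50 / heavy_atoms

# A hypothetical ~150 Da fragment hitting at 100 uM (pIC50 = 4)
# versus a hypothetical ~450 Da lead hitting at 10 nM (pIC50 = 8):
fragment_le = ligand_efficiency(4.0, 11)   # assume ~11 heavy atoms
lead_le = ligand_efficiency(8.0, 33)       # assume ~33 heavy atoms
print(round(fragment_le, 2), round(lead_le, 2))
```

By this yardstick, the far weaker fragment out-scores the potent lead per atom, which is exactly the argument for growing out from small, efficient starting points rather than pruning down large ones.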
He also emphasizes that having a simpler molecule to work on means that the SAR can be tested and expanded quickly, often without anyone hitting the lab bench at all. You can order things up from the vendors or raid your own screening collection for close analogs. This delays the entry of the medicinal chemists to the project, which (considering that their time is always in demand) is a feature to be happy about.
The article ends up by saying that "Fragonomics has won the field. . .The age of the medchemist is over; now is the time of the biophysicist." I don't know if that's quite the way to win friends and influence people, though. Medicinal chemists are rather sensitive to threats to their existence (with good reason), so my worry is that coming on like this will make chemists who haven't tried it leery of fragment-based drug design in general. I'm also not thrilled with "fragonomics" as a term (just as I'm not thrilled with most of the newly-coined "omics" terms). The word doesn't add anything; it's just a replacement for having to say "fragment-based drug discovery" or "FBDD" all the time. It's not that we don't need a replacement for the unwieldy phrase - it's just that I think that many people might (by now) be ready to dismiss anything that's had "omics" slapped on it. I wish I had something better to offer, but I'm coming up blank myself.
Category: Drug Assays
August 20, 2014
Perseverance is a critical variable in drug discovery. Too little of it, and you are absolutely guaranteed to fail - no drug has ever made it to market without trying the patience of everyone involved. Too much of it, and you are very nearly guaranteed to waste all your money: most drug development projects don't work, and eventually reach a point where no amount of time or money could make them work, either. Many are the efforts where leaders have gritted their teeth, redoubled their efforts, and led everyone further into the abyss.
But sometimes these things come through, and that's what seems to have happened with Amicus and their drug migalastat for Fabry's. It's a protein chaperone, one of the emerging class of drugs that work by stabilizing particular protein conformations to help regain function. At the end of 2012, Amicus and their partner GSK announced clinical trial results that didn't meet significance, which prompted GlaxoSmithKline to return rights to the drug to Amicus.
Who kept on with it. And who announced today that the second Phase III study had come back positive, enough so that they plan to file for regulatory approval. (The belief is that the first Phase III enrolled an inappropriate mix of patients). Congratulations to the company, who may well have given many Fabry's patients their first opportunity for an oral therapy for their disease.
Category: Clinical Trials
John LaMattina has a look at Pfizer's oncology portfolio, and what their relentless budget-cutting has been doing to it. The company is taking some criticism for having outlicensed two compounds (tremelimumab to AstraZeneca and neratinib to Puma) which seem to be performing very well after Pfizer ditched them. Here's LaMattina (a former Pfizer R&D head, for those who don't know):
Unfortunately, over 15 years of mergers and severe budget cuts, Pfizer has not been able to prosecute all of the compounds in its portfolio. Instead, it has had to make choices on which experimental medicines to keep and which to set aside. However, as I have stated before, these choices are filled with uncertainties as oftentimes the data in hand are far from complete. But in oncology, Pfizer seems to be especially snake-bit in the decisions it has made.
That goes for their internal compounds, too. As LaMattina goes on to say, palbociclib is supposed to be one of their better compounds, but it was shelved for several years due to more budget-cutting and the belief that the effort would be better spent elsewhere. It would be easy for an outside observer to whack away at the company and wonder how incompetent they could be to walk away from all these winners, but that really isn't fair. It's very hard in oncology to tell what's going to work out and what isn't - impossible, in fact, until compounds have progressed to a certain stage. The only way to be sure is to take these things on into the clinic and see, unfortunately (and there you have one of the reasons things are so expensive around here).
Pfizer brought up more interesting compounds than it later was able to develop. It's natural to wonder what they could have done with these if they hadn't been pursuing their well-known merger strategy over these years, but we'll never know the answer to that one. The company got too big and spent too much money, and then tried to cure that by getting even bigger. Every one of those mergers was a big disruption, and you sometimes wonder how anyone kept their focus on developing anything. Some of its drug-development choices were disastrous and completely their fault (the Exubera inhaled-insulin fiasco, for example), but their decisions in their oncology portfolio, while retrospectively awful, were probably quite defensible at the time. But if they hadn't been occupied with all those upheavals over the last ten to fifteen years, they might have had a better chance of focusing on at least a few more of their own compounds.
Their last big merger was with Wyeth. If you take Pfizer's R&D budget and Wyeth's and add them, you don't get Pfizer's R&D post-merger. Not even close. Pfizer's R&D is smaller now than their budget was alone before the deal. Pyrrhus would have recognized the problem.
Category: Business and Markets | Cancer | Drug Development | Drug Industry History
August 19, 2014
Here's a very good review article in J. Med. Chem. on the topic of protein binding. For those outside the field, that's the phenomenon of drug compounds getting into the bloodstream and then sticking to one or more blood proteins. Human serum albumin (HSA) is a big player here - it's a very abundant blood protein that's practically honeycombed with binding sites - but there are several others. The authors (from Genentech) take on the disagreements about whether low plasma protein binding is a good property for drug development (and conversely, whether high protein binding is a warning flag). The short answer, according to the paper: neither one.
To further examine the trend of PPB for recently approved drugs, we compiled the available PPB data for drugs approved by the U.S. FDA from 2003 to 2013. Although the distribution pattern of PPB is similar to those of the previously marketed drugs, the recently approved drugs generally show even higher PPB than the previously marketed drugs (Figure 1). The PPB of 45% newly approved drugs is >95%, and the PPB of 24% is >99%. These data demonstrate that compounds with PPB > 99% can still be valuable drugs. Retrospectively, if we had posed an arbitrary cutoff value for the PPB in the drug discovery stage, we could have missed many valuable medicines in the past decade. We suggest that PPB is neither a good nor a bad property for a drug and should not be optimized in drug design.
That topic has come up around here a few times, as could be expected - it's a standard med-chem argument. And this isn't even the first time that a paper has come out warning people that trying to optimize on "free fraction" is a bad idea: see this 2010 one from Nature Reviews Drug Discovery.
But it's clearly worth repeating - there are a lot of people who get quite worked up about this number - in some cases, because they have funny-looking PK and are trying to explain it, or in some cases, just because it's a number and numbers are good, right?
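Part of why the number looks so alarming, I suspect, is the arithmetic of the free fraction: going from 95% to 99% bound reads as four percentage points, but it's a five-fold drop in free drug. A quick sketch of that arithmetic (this is just the definition of free fraction, not any claim from the paper):

```python
def fraction_unbound(ppb_percent: float) -> float:
    """Free (unbound) fraction in plasma, given percent plasma protein binding."""
    return 1.0 - ppb_percent / 100.0

# 95% vs 99% bound: a small-looking difference in PPB,
# but a five-fold difference in free fraction.
fu_95 = fraction_unbound(95.0)  # 0.05
fu_99 = fraction_unbound(99.0)  # 0.01
print(round(fu_95 / fu_99, 1))
```

Which is precisely why the Genentech authors' point matters: the raw PPB percentage exaggerates its own importance, and optimizing on it in isolation buys you nothing.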
Category: Drug Assays | Drug Development | Pharmacokinetics
How many ways do we have to differentiate samples of closely related compounds? There's NMR, of course, and mass spec. But what if two compounds have the same mass, or have unrevealing NMR spectra? Here's a new paper in JACS that proposes another method entirely.
Well, maybe not entirely, because it still relies on NMR. But this one is taking advantage of the sensitivity of 19F NMR shifts to molecular interactions (the same thing that underlies its use as a fragment-screening technique). The authors (Timothy Swager and co-workers at MIT) have prepared several calixarene host molecules which can complex a variety of small organic guests. The host structures feature nonequivalent fluorinated groups, and when another molecule binds, the 19F NMR peaks shift around compared to the unoccupied state. (Shown are a set of their test analytes, plotted by the change in three different 19F shifts).
That's a pretty ingenious idea - anyone who's done 19F NMR work will hear about the concept and immediately say "Oh yeah - that would work, wouldn't it?" But no one else seems to have thought of it. Spectra of their various host molecules show that chemically very similar molecules can be immediately differentiated (such as acetonitrile versus propionitrile), and structural isomers of the same mass are also instantly distinguished. Mixtures of several compounds can also be assigned component by component.
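If you wanted to automate the "which compound is this?" step, the fingerprint idea boils down to nearest-neighbor matching on the shift vectors. Here's a sketch with invented delta-delta values (the real numbers live in the paper's figures) - only the matching logic is the point:

```python
import math

# Hypothetical 19F shift changes (ppm) for three nitrile guests across
# three inequivalent fluorine reporters. Illustrative numbers only,
# not values from the Swager paper.
LIBRARY = {
    "acetonitrile":  (0.12, -0.05, 0.31),
    "propionitrile": (0.08, -0.11, 0.27),
    "benzonitrile":  (0.45,  0.02, 0.10),
}

def identify(shifts, library=LIBRARY):
    """Return the library guest whose shift vector is closest
    (Euclidean distance) to the observed one."""
    return min(library, key=lambda name: math.dist(shifts, library[name]))

print(identify((0.09, -0.10, 0.28)))  # closest match: propionitrile
```

With a well-chosen host set, even chemically similar guests end up well-separated in this shift space, which is what makes the acetonitrile-versus-propionitrile discrimination work.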
This paper concentrates on nitriles, which all seem to bind in a similar way inside the host molecules. That means that solvents like acetone and ethyl acetate don't interfere at all, but it also means that these particular hosts are far from universal sensors. But no one should expect them to be. The same 19F shift idea can be applied across all sorts of structures. You could imagine working up a "pesticide analysis suite" or a "chemical warfare precursor suite" of well-chosen host structures, sold together as a detection kit.
This idea is going to be competing with LC/MS techniques. Those, when they're up and running, clearly provide more information about a given mixture, but good reproducible methods can take a fair amount of work up front. This method seems to me to be more of a competition for something like ELISA assays, answering questions like "Is there any of compound X in this sample?" or "Here's a sample contaminated with an unknown member of Compound Class Y. Which one is it?" The disadvantage, compared to an ELISA, is that this method needs an NMR (with a fluorine probe) handy.
But it'll be worth seeing what can be made of it. I wonder if there could be host molecules that are particularly good at sensing/complexing particular key functional groups, the way that the current set picks up nitriles? How far into macromolecular/biomolecular space can this idea be extended? If it can be implemented in areas where traditional NMR and LC/MS have problems, it could find plenty of use.
Category: Analytical Chemistry
August 18, 2014
I spent the morning in the lab pretty much destroying whatever I touched: wrong solvents for chromatography, dropping things in the sink, bumping solutions all over the inside of my rota-vap. This is, though, a Monday, so at least I have that to blame. But if everyone started out the week the way I did, then scientific progress has come to a juddering halt around 11 AM EST. My hope is that I can be less of a wrecking ball during the rest of the day and start working my way back into positive territory.
Category: Life in the Drug Labs
Here's a look back at the beginnings of ChemDraw, and you won't be surprised to hear that its origins go back to someone (Dave Evans' wife!) who'd had way too much of the old-fashioned style of structure drawing.
As I've mentioned here before, my grad school experience ended up being timed to experience both worlds. For my second-year continuation exam, I had to do the structures the classic way: green plastic template to make the chair and boat cyclohexanes all come out the same, rub-on letters for the atoms. If you wanted to copy a structure, well, you went down to the copier and you copied that structure. And you Frankensteined each scheme together with tape (matte, not shiny) or glue stick to make The Final Copy, rolling it into the typewriter to put in the captions and the text over the arrows. As I've always said, it was, in retrospect, not too far off from incising a buffalo-dung tablet with a sharpened stick and leaving it in the sun to dry.
It was a lot closer to that than it was to ChemDraw, that's for sure. (The sharpened stick would have worked pretty well with those rub-on letter transfers). And this is exactly what happened every time an organic chemist saw it in action:
The program developed little by little in this manner, with Sally channeling the needs of chemists and Rubenstein doing the programming. In July of 1985, ChemDraw premiered at the Gordon Research Conference on Reactions & Processes in New Hampshire. Rubenstein and the Evanses demonstrated it during a break in the conference. Bad weather kept the conferees indoors, so attendance was high.
Stuart L. Schreiber, then a chemistry professor at Yale University, saw the demo and recalls “knowing instantly that my prized drafting board and my obsessive drafting of chemical formulas were over.”
Schreiber holds the distinction of being the first person to purchase ChemDraw. “The impact of seeing ChemDraw on a Macintosh computer was dramatic and immediate,” he says. “There was no doubt that this was going to change the way chemists interact with each other and the rest of the scientific community,” he says. At the time Schreiber was proudly using his Xerox Memorywriter electronic typewriter with two lines of editable text. “The combination of the Macintosh computer and ChemDraw clearly demanded next-day adoption.” He rushed home to New Haven and placed his order.
That's just how it went. Every organic chemist who saw the program in action immediately wanted it; the superiority of the program to any of the manual methods was immediately and overwhelmingly obvious. You hear similar stories about people's reactions to the first spreadsheet program (VisiCalc) in the late 1970s, and for exactly the same reasons. Advances like these need no sales pitch at all - you could demo such things in complete silence for five minutes and people would line up with their money. I can remember seeing ChemDraw for the first time when I was at Duke, and being stunned by the idea of copying and pasting structures, resizing them, rotating them, joining them together, and (especially) saving the damned things for later.
So for my dissertation, which I started writing in late 1987, it was Word (3.02!) and ChemDraw all the way, and I was the first person in Duke's chemistry department to solo with those two for the PhD writeup. I did some of it on a Mac Plus and a lot of it on Mac SEs, switching floppy disks in and out. There was a Mac II down the hall, with a color screen and a 20 MB hard drive, and I really felt like I was on the cutting edge when I used that one. My lone disk with the manuscript in progress went unreadable and unrecoverable after two weeks of intermittent work, which taught me a lifelong lesson about making backups. Although it was a major pain to keep it up, I ended (with not-so-unusual grad student paranoia) by keeping five copies at all times: the current working copy, an extra one in the desk drawer in my lab, one back by my bench, one over in my apartment, and one in the glove compartment of my car.
My PhD advisor was not a computer user himself at the time, though, which led to an interesting scene when I did hand the manuscript over to him some months later (which process was an interesting story in itself, for another time). He got it back to me with a large number of hand-marked corrections, but as I flipped through the pages I realized that almost all of them were the same corrections, flagged every time that they appeared. I saw him that afternoon, and he asked if I'd seen his changes. I had, I told him, and I'd made all the corrections. He looked at me, puzzled, so I told him about the "Find and Replace" command, and he raised his eyebrows and said "That's very. . .convenient, isn't it?" "Sure is," I badly wanted to say. "Welcome to the fun-filled late 20th century, boss. Let's see, what else. . .we landed on the moon in '69. Oh, the Beatles broke up. And. . ."
But I didn't say any of that, of course. You don't go around saying things like that to your professor, especially when you're in the final stages of writing up, not unless you want to face the choice of going back to the lab for a couple more years or asking people if they'd like the Value Meal. No, facing your committee is preferable in every way.
Category: Graduate School
August 15, 2014
There's a post by Peter Bach, of the Center for Health Policy and Outcomes, that's been getting a lot of attention the last few days. It's called "Unpronounceable Drugs, Incomprehensible Prices", and you know what it says.
No, really, you do, even if you haven't seen it. Too high, unconscionable, market can't support, what they can get away with, every year, too high. Before I get to the uncomfortable parts of my own take on this, let me stipulate a couple of things up front: (1) I do think that the industry is inviting trouble for itself by the way it is raising prices. It is in drug companies' short term interest to do so, but long term I worry that it's going to bring on some sort of price-control regimen. (2) Some drug prices probably are too high (but see below for what that means). Big breakthroughs can, at least in theory, command high prices, but not everything deserves to be priced at the level it is.
I was about to say "see below" again, but this paragraph is below, so here goes. Let me quote a bit from Bach's article:
Cancer drug prices keep rising. The industry says this reflects the rising costs of drug development and the business risks they must take when testing new drugs. I think they charge what they think they can get away with, which goes up every year. . .Regardless of the estimate, the pricing of new drugs for cancer and now other common diseases has come unglued from the rationale the industry has long espoused. Instead, pricing is explained by a phenomenon of increasing boldness by the industry against a backdrop of regulators and insurers who have no legal authority to dictate or even propose alternative pricing models.
Bach's first assertion is correct: drug companies are charging what they think they can get away with. In that, they are joined by pretty much every other business in the entire country. I