About this Author
Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship on his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases.
To contact Derek email him directly: email@example.com
In the Pipeline:
Don't miss Derek Lowe's excellent commentary on drug discovery and the pharma industry in general at In the Pipeline
March 11, 2014
Many of you will have seen the stories of a dying 7-year-old whose parents are seeking compassionate use access to a drug being developed by Chimerix. It's hard reading for a parent, or for anyone.
But I can do no better than echo John Carroll's editorial here. What it comes down to, as far as I can see, is that a company this size will go bankrupt if it tries to deal with all these requests. So under the current system, we have a choice: let small companies try to discover drugs like this without granting access, or wipe them out by making them grant it. Even for large companies, it's rough, as I wrote about here. I don't have a good solution.
Category: Drug Development
That's the take-home of this post by Adam Feuerstein about La Jolla Pharmaceuticals and their kidney drug candidate GCS-100. It's a galectin inhibitor, and it's had a rough time in development. But investors in the company were cheered up a great deal by a recent press release, stating that the drug had shown positive effects.
But look closer. The company's bar-chart presentation looks reasonably good, albeit with an inverted dose-response (the low dose looks like it worked; the high dose didn't). But scroll down on that page to see the data expressed as means with error bars. Oh dear. . .
Update: it's been mentioned in the comments that the data look better with standard error rather than standard deviations. Courtesy of a reader, here's the graph in that form. And it does look better, but not by all that much:
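For readers who want the distinction spelled out: standard error bars are just the standard-deviation bars divided by the square root of the sample size, so they are always narrower. A minimal sketch with made-up numbers (nothing to do with the actual GCS-100 data):

```python
import math
import statistics

# Hypothetical illustration only: ten made-up per-patient changes.
values = [4.0, -2.0, 7.0, 1.0, -3.0, 6.0, 2.0, -1.0, 5.0, 0.0]

mean = statistics.mean(values)
sd = statistics.stdev(values)          # sample standard deviation
se = sd / math.sqrt(len(values))       # standard error of the mean

print(f"mean = {mean:.2f}, SD = {sd:.2f}, SE = {se:.2f}")
# With n = 10, the error bars shrink by a factor of sqrt(10), about 3.2,
# when you switch from SD to SE - same data, very different-looking chart.
```

That's why the same dataset can look noisy with SD bars and tidy with SE bars; neither choice is wrong, but the reader needs to know which one is plotted.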
Category: Business and Markets | Clinical Trials
March 10, 2014
Bruce Booth has opened up the number of authors who will be posting at LifeSciVC, and there's an interesting post up on startups now from Atlas Venture's Mike Gilman. Edit: nope, my mistake. This one is Bruce Booth's! Here are some of his conclusions:
here’s a list of a few of the perceived advantages of Pharma R&D today:
Almost unlimited access to all the latest technologies across drug discovery, ADME, toxicology, and clinical development, including all the latest capital equipment, compound libraries, antibody approaches, etc
International reach to support global clinical and regulatory processes to fully enable drug development programs
Deep and insightful commercial input into the markets, the pulse of the practicing physician, and the payors on what’s the right product profile
Gigantic cash flow streams that provide 15-20% of the topline to support a largely “block grant” model of R&D (fixing R&D spend to the percentage of sales)
Decades of institutional memory providing the scar tissue around what works and what doesn’t (e.g., insight into project attrition at massive scale)
This is a solid list of advantages, and they all have real merit.
But like the biblical Goliath, whose size and strength appeared to the Israelites as great advantages, they are also the roots of Pharma’s disadvantages. All of these derive their value as inward and relatively insular forces. Institutional memory in particular can serve to either unlock better paths to innovation or to stifle those that want to explore new ways of doing things. Lipinski’s Rules, hERG liabilities, and other candidate guidelines derived from legacy “survivor bias”-style analyses are case examples of this tension – unfortunately the stifling aspects rather than the unlocking ones often triumph in big firms.
Further, these impressive corporate R&D “advantages” are of course the product of Big Pharma’s path-dependency: single blockbuster successes discovered in the ‘60s-70s led to early mergers in the ‘80-90s, and bigger mega-mergers in the late 90s-00s, to form the organizations of today. Bigger and bigger R&D budgets buying up more and more “things” in the quest for improved productivity. In a sense, the growth drivers underlying these mergers acted like the excessive hGH coming from Goliath’s pituitary – the scale and constant growth pressure was a product of a disease, not a design.
He makes the point earlier on that constraints on spending, while they may not feel like a good thing, may actually be one. More money and resources often lead to box-checking behavior and a feeling of "Since we can do this, we should". There's some institutional politics going on there, of course - if you've checked off all the boxes that everyone agrees are needed for success, and you still don't succeed, then it can't be your fault. Or anyone's. That's not to say that all failures have to be someone's fault, but this sort of thing obscures the times when there's actual blame to go around.
The post also goes into another related problem: if you have all these resources, that you've paid for (and are continuing to pay for to keep running), then if they're not being used, things look like they're being wasted. They probably are being wasted. So stuff gets shoveled on, to keep everything running at all times. It's certainly in the interest of the people in those areas to keep working (and to be seen to be keeping working). It's in the interest of the people who manage those areas, and of the ones who advocated for bringing in whatever process or technology. But these can be perverse incentives.
The main problem I have with the post is the opening analogy to the recent Mars mission launched by India. I have to salute the people behind the Mangalyaan mission - it's a real accomplishment, and if it works, India will be only the fourth nation (or group of nations) to reach Mars. But going on about how cheaply it was done compared to the simultaneous MAVEN mission from the US isn't a good comparison. Yes, the Indian mission costs about one-eighth as much. But it has one quarter the payload and is targeted to last about half as long, and that's leaving out any consideration of the actual instrumental capabilities. It's also worth noting that the primary goal of the Mangalyaan mission is to demonstrate that India can pull it off; any data from Mars are (officially) secondary. I'd find the arguments about small and large Pharma more convincing without this comparison, to be honest.
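Taking those rough figures at face value, the cost advantage mostly washes out - a quick back-of-the-envelope check (illustrative ratios only, not real mission accounting):

```python
# Rough cost-effectiveness check using the ratios in the paragraph above.
# Mangalyaan vs. MAVEN: 1/8 the cost, 1/4 the payload, 1/2 the design life.
relative_cost = 1 / 8
relative_payload = 1 / 4
relative_lifetime = 1 / 2

# Cost per unit of payload per unit of mission time, relative to MAVEN:
cost_per_payload_month = relative_cost / (relative_payload * relative_lifetime)
print(f"Relative cost per payload-month: {cost_per_payload_month:.1f}")
# -> 1.0: on this crude metric the two missions come out about even.
```

Which is the point: "eight times cheaper" only impresses until you divide through by what you're getting for the money.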
But the larger point stands: if you had to start discovering drugs from scratch, knowing what's happened to other, larger organizations, are there things you would do differently? Emphasize more? Avoid altogether? A startup allows you to put these ideas into practice. Retrofitting them onto a larger, older company is nearly impossible.
Category: Business and Markets | Drug Industry History
Update: more doubts on the statistical power behind this, and the coverage it's getting in the press.
There's word of a possible early diagnostic blood test for Alzheimer's. A large team (mostly from Georgetown and Rochester) has published a paper in Nature Medicine on their search for lipid-based markers of incipient disease. They say that they have a ten-lipid panel that has a 90% success rate in predicting cognitive decline within three years.
I can certainly see how this would be possible - lipids could be markers of membrane trouble and myelin trouble, and we already know that the lipoprotein ApoE4 is linked with Alzheimer's. At the same time, I'd like to see how this looks when more data are available. The absolute number of patients showing the effect isn't large. And there's always a danger, on these biomarker fishing expeditions, of finding a spurious correlation. The fact that it takes ten lipids to get the accuracy up could be OK, or it could be a sign of statistical trouble. (It's a bit like seeing a QSAR model that needs ten parameters to be predictive).
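One way to see why a ten-marker panel invites suspicion: screen enough candidate markers independently and spurious "hits" become likely by chance alone. A back-of-the-envelope sketch (illustrative arithmetic only, not the paper's actual statistics, which were built more carefully than this):

```python
# The biomarker-fishing hazard in one loop: if each candidate lipid is
# tested independently at p < 0.05, the chance of at least one purely
# spurious "hit" grows quickly with the number of candidates screened.
alpha = 0.05

for n_markers in (1, 10, 100):
    p_spurious = 1 - (1 - alpha) ** n_markers
    print(f"{n_markers:>3} markers screened -> "
          f"P(at least one false positive) = {p_spurious:.2f}")
# -> roughly 0.05, 0.40, and 0.99
```

That's why a larger, independent sample is the real test: a genuine ten-lipid signature will replicate, while chance correlations of this sort generally won't.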
But this could indeed be real, and if it is, a larger sample will nail it down. That should also give a much better idea of the false-positive and false-negative rates, which will be very important in a diagnosis like this. It'll also be interesting to see if the time horizon can be improved past three years. The usual worries about an Alzheimer's diagnostic apply - some people will want to know, and others won't, since there's no treatment. If this works out, though, it would also seem to be very useful for future clinical trials, which are (more and more) focusing on people in the earliest stages of the disease.
Category: Alzheimer's Disease
One of the questions I was asked after my talk at Illinois was about repurposing drugs. I replied that there might be some opportunities there, but I didn't think that there were many big ones that had been missed, unless new biology/target ID turned up. Well, here's a news story that contradicts that view of mine, and I'm happy to be wrong this time.
Researchers in Manchester have been working on the use of lopinavir (an existing drug for HIV) as a therapy for HPV, the cause of most cervical cancers. There's a vaccine for it now, but that doesn't do much for women who are already diagnosed with probable or confirmed disease. But lopinavir therapy seems to do good, and plenty of it. A preliminary trial in Kenya has apparently shown a very high response rate, and they're now raising money for a larger (up to 1,000 patient) trial. I hope that it works out as it appears to - with any luck, HPV-driven disease will gradually disappear from the world in the coming decades, but there will be plenty of patients in the meantime.
As that Daily Telegraph article shows, it wasn't easy getting this work going, because of problems with the availability of the drug in the right formulation. Congratulations to the Manchester group and their collaborators in Kenya for being so persistent.
Category: Cancer | Clinical Trials | Infectious Diseases
March 7, 2014
Walensky and Bird have a Miniperspective out in J. Med. Chem. on stapled peptides, giving advice on how to increase one's chances of success in the area. Worth checking out, unless you're at Genentech or WEHI, of course. The authors might say that it's especially worth reading in those cases, come to think of it. I await the day when this dispute gets resolved, although a lot of people awaited the day that the nonclassical carbocation controversy got resolved, too, and look how long that took.
And in Science, Tehshik Yoon has a review on visible-light catalyzed photochemistry. I like these reactions a lot, and have run a few myself. The literature has been blowing up all over the place in this field, and it's good to have an overview like this to keep things straight.
Category: Chemical News
There's an interesting report from the Buchwald group using the Fujita "molecular sponge" crystallography technique. The last report on this was a correction, amid reports that the method was not as widely applicable as had been hoped, so I'm very happy to see it being used here.
They're revising the structure of a new reagent (from the Lu and Shen groups in Shanghai) for introducing the SCF3 group. It was proposed to be a hypervalent iodine (similar to other reagents in this class), but Buchwald's group found some NMR data and reactivity trends that suggested the structure might be in the open form, rather than the five-membered iodine ring one.
Soaking this reagent into the MOF crystal provided a structure, although if you read the supporting information, it wasn't easy. The compound was still somewhat disordered in the MOF lattice, and there were still nitrobenzene and cyclohexane solvent molecules present. The SCF3 reagent showed up in two crystallographically independent sites, one of them associated with residual nitrobenzene. After a good deal of work, though, they did show that the open-form structure was present. (The Shen et al. paper's conclusions on its synthetic uses, though, are all still valid; it's just that the structure doesn't fall into the same series as expected.)
So the MOF crystallography method lives, although I have yet to hear of it giving a structure with a nitrogen-containing compound (which rather limits its use in drug discovery work, as you might imagine).
Category: Chemical News
March 6, 2014
FierceBiotech has their roundup of the top areas of the US for venture capital funding in biopharma in 2013, and naturally, San Francisco and the Boston area are fighting it out for the top spot. But Oakland and the East Bay region are in a separate category, and if those are wrapped into the rest of SF, the whole Bay Area wins out by a pretty good margin.
But, as was pointed out by Nick Taylor on Twitter, San Francisco (not even counting Oakland, etc.) and Boston together raised more biopharma VC funding than all of Europe put together (see slide 10 here). SF raised $1.15 billion, and Boston $0.93 billion, while all of Europe was about $1.9 billion.
And then there's the rest of the Bay area, San Diego, Seattle, NY/NJ/Philadelphia, the Research Triangle and everyone else. In medical research, the US is the startup capital of the world.
Category: Business and Markets
There's been some (justified) hand-wringing in scientific publishing circles over the revelation that at least 120 abstracts and papers out there in the literature are complete nonsense generated by SciGen. (A few previous SciGen adventures can be found here and here) Some news reports have made it seem like these were regular full papers, but they're actually published conference proceedings which (frankly) are sort of the ugly stepchild of the science journal world to begin with. They're supposed to be reviewed, and they certainly should have been reviewed enough for someone to catch on to the fact that they were devoid of meaning, but if you're going to fill the pages of a reputable publisher with Aphasio-matic ramblings, that's the way to do it.
And these were reputable publishers, Springer and the IEEE. Springer has announced that they're removing all this stuff from their databases, since the normal retraction procedure doesn't exactly seem necessary. They're also trying to figure out what loophole let this happen in the first place, and they've contacted Cyril Labbé, the French researcher who wrote the SciGen-detecting software, for advice. The IEEE, for its part, has had this problem before, has had it for years, has been warned about it, but still seems to be ready and willing to publish gibberish. I don't know if Springer has had bad experiences with SciGen material, but the IEEE journals sure have, and it's apparently done no good at all. Live and don't learn. The organization has apparently removed the papers, but has made (as far as I can tell) no public statement whatsoever about the whole incident.
So who went to all this trouble, anyway? That Scholarly Kitchen link above has some speculations:
An additional (and even more disturbing) problem with the proceedings papers most recently discovered is emerging as the investigation continues: at least one of the authors contacted had no idea that he had been named as a coauthor. This suggests that the submissions were more than spoofs — spoofing can easily be accomplished by using fake names as well as fake content. The use of real scientists’ names suggests that at least some of these papers represent intentional scholarly fraud, probably with the intention of adding bulk to scholars’ résumés.
This takes us back to the open-access versus traditional publisher wars. When this sort of thing happens to OA journals, the response from some of the other publishers, overtly at times, is "Well, yeah, sure, that's what you get when you don't go with the name brand". And there's certainly a lot of weird crap that shows up - take a look at this thing, from the open access journal Cancer Research and Management, and see if you can make head or tail of it. Clozapine might have helped it a bit, but maybe not:
Our findings suggest that we are dealing with true reverse biologic system information in an activated collective cancer stem cell memory, in which physics participates in the elaboration of geometric complexes and chiral biomolecules that serve to build bodies with embryoid print as it develops during gestation.
That's a little more coherent than what SciGen will give you, but not so coherent as to, you know, make any sense. And if by any chance you were OK with that extract, the rest of the abstract mentions comets and the Large Hadron Collider, so take that. It lacks only gyres (don't we all?). But what this latest incident tells us is that this paper would have waltzed right in at plenty of other publishers, open-access or not. The authors should be aiming higher, y'know?
Category: The Scientific Literature
March 5, 2014
My visit to Illinois went well, and I had a very good time talking to the students and faculty here in Champaign/Urbana. But American Airlines has decided not to fly anyone to Chicago today, so I've had to round up alternative transportation and reschedule flights, which isn't going to leave much time for blogging, from the looks of it. So normal service, or what passes for it around here, will resume on Thursday!
Update: traveling interruption is right. I was supposed to fly out of Champaign at 8 AM, go through O'Hare, and land in Boston at 1:45. As mentioned above, that 8 AM flight disappeared, so I took a shuttle bus up to Chicago - which, because of the snow and thick traffic, ended up taking over four hours to reach ORD. But I was still in time for my rebooked flight - or I would have been, except that American had rebooked me for that flight departing Thursday instead of Wednesday. The guy at the airport desk was as puzzled as I was about why anyone would do that, but he got me on a direct flight to Boston leaving at 1:20. Which was delayed. And delayed again. I did finally make it back to Logan close to 6 PM, feeling a bit as if I'd made the trip back on a pogo stick! I know that ORD is capable of far more than this, though, and I'm glad that I escaped any worse fate.
Category: Blog Housekeeping
March 4, 2014
PD-1 therapies are a big, big deal in oncology these days, and with results like this, no wonder. It's a negative regulator of T-cell function, and blocking it appears to recruit a much stronger immune response to tumor cells. Bristol-Myers Squibb, Merck, and others have antibodies in the clinic, and results are piling up to suggest that these are going to be big.
The BMS entry, BMS-936558 (nivolumab), had already shown some promising Phase II results in non small-cell lung cancer, renal carcinoma, and colorectal cancer. Many patients don't respond, but the ones that do seem to show real benefit. (And it's worth noting that there are whole tumor types that don't necessarily respond - as far as I know, no one's gotten a PD-1 response in pancreatic cancer yet, which confirms its nastiness).
The new results are for metastatic melanoma, a famously hard-to-treat condition. Kinase inhibitors like Zelboraf have shown some results, but not without problems, and the cancer always finds a way around and comes back. But this PD-1 antibody seems to have more long-lasting effects: the large study group (Dana-Farber, Johns Hopkins, Yale and more) on this paper report that of 107 patients treated, 33 showed actual tumor regressions. Overall - that is, even counting the ones that did not show as strong a response - survival rates were 62% after one year and 43% after two years, which is a real improvement. Average life expectancy at the start of the study was one year. 17 patients discontinued therapy but still continued to show a response after antibody dosing was halted, and the overall survival numbers strongly suggest that the treatment is having a real effect on new tumor formation and progression.
So the immunotherapy wave continues in oncology, and may well not have even crested yet. Let's hope it hasn't; this is good stuff.
Category: Cancer | Clinical Trials
Novartis has always been very, very quiet about their re-orgs and layoffs, to the point that when I and others write about them, we get hits from people at Novartis trying to find out what's going on. John Carroll at FierceBiotech has experienced this as well, and he now has this story, from a reporter at the Swiss Tages-Anzeiger newspaper, that says that the company has actually eliminated between 3,000 and 4,000 jobs since last fall.
That comes to about a thousand in Europe (500 of them in Basel), 760 in the US (mostly in the sales force), and 400 during the closure of Horsham in the UK. My impression is that many of the others represent various manufacturing sites. Congratulations to the Zürich newspaper's Andreas Möckli for digging all this out - it could not have been easy. Here's my translation of the introductory paragraph:
It's a proven pattern at the drug company: Novartis cuts jobs worldwide at various sites, but the full extent of these actions is never communicated. While other companies inside (and outside) the industry announce their staff reductions with concrete numbers, Novartis is often content just to confirm local media reports. At the Group level, the company has never announced any job cuts, even though jobs are being lost at location after location. . .
Category: Business and Markets
March 3, 2014
Via Retraction Watch, here's an outspoken interview with Sydney Brenner, who's never been the sort of person to keep his opinions bottled up inside him. Here, for example, are his views on graduate school in the US:
Today the Americans have developed a new culture in science based on the slavery of graduate students. Now graduate students of American institutions are afraid. He just performs. He’s got to perform. The post-doc is an indentured labourer. We now have labs that don’t work in the same way as the early labs where people were independent, where they could have their own ideas and could pursue them.
The most important thing today is for young people to take responsibility, to actually know how to formulate an idea and how to work on it. Not to buy into the so-called apprenticeship. I think you can only foster that by having sort of deviant studies. That is, you go on and do something really different. Then I think you will be able to foster it.
But today there is no way to do this without money. That’s the difficulty. In order to do science you have to have it supported. The supporters now, the bureaucrats of science, do not wish to take any risks. So in order to get it supported, they want to know from the start that it will work. This means you have to have preliminary information, which means that you are bound to follow the straight and narrow.
I can't argue with that. In academia these days, it seems to me that the main way that something really unusual or orthogonal gets done is by people doing something else with their grant money than they told people they'd do. Which has always been the case to some extent, but I get the impression it's more so than ever. The article also quotes from Brenner's appreciation of the late Fred Sanger, where he made a similar point:
A Fred Sanger would not survive today’s world of science. With continuous reporting and appraisals, some committee would note that he published little of import between insulin in 1952 and his first paper on RNA sequencing in 1967 with another long gap until DNA sequencing in 1977. He would be labelled as unproductive, and his modest personal support would be denied. We no longer have a culture that allows individuals to embark on long-term—and what would be considered today extremely risky—projects.
Here are Brenner's mild, temperate views on the peer-review system and its intersection with academic publishing:
. . .I don’t believe in peer review because I think it’s very distorted and as I’ve said, it’s simply a regression to the mean.
I think peer review is hindering science. In fact, I think it has become a completely corrupt system. It’s corrupt in many ways, in that scientists and academics have handed over to the editors of these journals the ability to make judgment on science and scientists. There are universities in America, and I’ve heard from many committees, that we won’t consider people’s publications in low impact factor journals.
Now I mean, people are trying to do something, but I think it’s not publish or perish, it’s publish in the okay places [or perish]. And this has assembled a most ridiculous group of people. I wrote a column for many years in the nineties, in a journal called Current Biology. In one article, “Hard Cases”, I campaigned against this [culture] because I think it is not only bad, it’s corrupt. In other words it puts the judgment in the hands of people who really have no reason to exercise judgment at all. And that’s all been done in the aid of commerce, because they are now giant organisations making money out of it.
I don't find a lot to disagree with there, either. The big scientific publishers have some good people working for them, but the entire enterprise is looking more and more suspect. There's a huge moral hazard involved, which we don't seem to be avoiding very well at all.
Category: General Scientific News | The Scientific Literature
That McKinsey piece I mentioned the other day, the one saying that big drug mergers have pretty much worked out just fine, has been pretty thoroughly torn apart in the comments section. A reader has sent along another report from one of their competitors in the high-priced consulting game, this overview from Deloitte.
Most readers won't find much in there that's new. But at least you won't find so many things in there that make you wonder what's in their water supply. And there are some non-rosy parts, for sure:
While the U.S. Food and Drug Administration (FDA) approved 39 new molecular entities in 2012, up from 31 in 2011 and the highest number since 1996 (Figure 4), a Deloitte United Kingdom Centre for Health Solutions analysis of the late-stage pipelines of the leading 12 life sciences companies indicates that R&D returns have been declining since 2010. Life sciences innovators are feeling the impact of rising costs and a decline in forecasted sales revenues driven by an age of austerity and the patent expirations of many blockbuster drugs. The cost of developing an asset from discovery to commercialization has increased by 18 per cent, from $1,094 million in 2010 to $1,290 million in 2013. Over the same time period, the forecasted peak sales (highest-value sales in a single year) of an asset have declined by over 40 percent, down from $816 million in 2010 to $466 million in 2013. Total forecast sales over the lifetime of a product have also declined since 2010 and, in 2013 are estimated to be $4.6 billion.
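For what it's worth, the percentages in that quoted passage are internally consistent with the dollar figures - a quick check:

```python
# Sanity check on the Deloitte figures quoted above.
cost_2010, cost_2013 = 1094, 1290   # $M, discovery-to-commercialization cost
peak_2010, peak_2013 = 816, 466     # $M, forecast peak annual sales

cost_increase = (cost_2013 - cost_2010) / cost_2010
peak_decline = (peak_2010 - peak_2013) / peak_2010

print(f"Cost increase:      {cost_increase:.0%}")   # 18%, as stated
print(f"Peak-sales decline: {peak_decline:.0%}")    # 43%, i.e. "over 40 percent"
```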
I don't know how much those estimates jump around under normal conditions, but those are certainly not good trends. The overall picture, while not relentlessly gloomy, is certainly not relentlessly happy. You can extract some combinations out of the report that sound good, though, sort of like those Hollywood movie-deal pitches - you know, "Ocean's 11. . .in space!". In this case, I think the pitch would be "Biosimilars. . .in China!".
Category: Business and Markets
I'll be visiting Illinois tomorrow to give a talk as part of the university's Chemistry-Biology Interface program. The first week of March is probably not the time to see the Champaign-Urbana landscape at its best, but then, I'm not leaving a lot of scenic grandeur behind in the Boston area this time of year, either. By March, winter's the guy who hasn't realized that everyone else at the party left a while ago. Fortunately, the snow that was forecast to mess with my flights today seems to have vanished, for once. . .
Category: Blog Housekeeping
February 28, 2014
Here's a very nice perspective on what gets funded in drug research and why. Robert Kocher and Bryan Roberts bring their venture-capital viewpoint (Venrock) to the readers of the NEJM:
It is not mysterious why projects get funded. As venture-capital investors, we evaluate projects along four primary dimensions: development costs, selling costs, differentiation of the drug relative to current treatments, and incidence and prevalence of the targeted disease (see table). For a project to be attractive, it needs to be favorably reviewed on at least two of these dimensions. Many drugs designed for orphan diseases and cancers are good investments of scarce capital, since they tend to have relatively low development costs and selling costs and to be strongly differentiated from the current treatment options. Conversely, investors are less likely to fund drugs with much higher development and selling costs (e.g., drugs for type 2 diabetes or psychiatric disorders) and drugs that cannot be strongly differentiated from current treatment options — often because low-cost generics are available to treat the targeted condition — despite the condition's high incidence and prevalence (e.g., drugs for hypertension or hypercholesterolemia).
Since improving the rate of discovery is a rather knotty, multivariate problem, the authors turn to the economic back end of the process. They make the case for the FDA to move more towards conditional approvals, since no Phase III trial can be large enough (or long-running enough) to pick up on all the "long tail" adverse events that might be waiting out there. Current Phase III trials, they say, are often overpowered for efficacy but are still underpowered for rare events, so we're spending a lot of money rather inefficiently.
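The underpowering-for-rare-events point is easy to quantify: the chance of observing at least one case of an adverse event with true incidence p in an n-patient trial is 1 - (1 - p)^n. A quick sketch (generic textbook arithmetic, not figures from the NEJM piece):

```python
# Why even large Phase III trials miss rare adverse events: the chance of
# seeing at least one case of an event with true incidence p among n
# patients is 1 - (1 - p)**n, which falls off fast as the event gets rarer.
def p_at_least_one(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

for denom in (1_000, 10_000, 100_000):
    p = 1 / denom
    print(f"incidence 1 in {denom:>7,}: "
          f"P(seen in a 3,000-patient trial) = {p_at_least_one(p, 3000):.2f}")
# -> about 0.95, 0.26, and 0.03 respectively
```

So a 3,000-patient trial will almost certainly catch a 1-in-1,000 event, but a 1-in-100,000 event will almost certainly sail through undetected, which is exactly the argument for post-marketing surveillance.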
I think they've got a good point, but the FDA already gets enough flak as it is. Changing things in this way, if done too quickly (and frankly, if done too openly), would be seen by many as a bean-counting technique to shift the risk onto the paying customers. Can't you hear it now? But the world they describe would be a good one, if it's feasible:
We estimate that development costs for drugs could be reduced by as much as 90%, and the time required by 50%, if the threshold for initial approval were defined in terms of efficacy and fundamental safety. Cutting costs and time, while requiring high-quality and transparent patient registries for independent safety monitoring, would be a more informative and cost-effective approach. With the widespread adoption of electronic health records and the introduction of many low-cost data-analysis tools, it is now feasible to develop mandatory postmarketing surveillance programs that make thousand-patient trials obsolete. Large data sets would also inoculate drug makers against spurious claims such as the false association of pancreatitis with the glucagon-like peptide 1 (GLP-1) and dipeptidyl peptidase 4 (DPP-4) inhibitors. At the same time, it is essential to empower the FDA to quickly remove or restrict the use of drugs when safety signals emerge from the improved data and safety monitoring.
This moves beyond clinical science and into politics, which (as the cliché has it) is the art of the possible. Even if we agree that this move is desirable, is it possible, or not?
Category: Clinical Trials
Wavefunction has a post about this paper from J. Med. Chem. on a series of possible antitrypanosomals from the Broad Institute's compound collection. It's a good illustration of the power of internal hydrogen bonds - in this case, one series of isomers can make the bond, but that ties up their polar groups, making them less soluble but more cell-permeable. The isomer that doesn't form the internal H-bond is more polar and more soluble, but less able to get into cells. Edit - fixed this part.
So if your compound has too many polar functionalities, an internal hydrogen bond can be just the thing to bring on better activity, because it tones things down a bit. And there are always the conformational effects to keep in mind. Tying a molecule up like that is the same as any other ring-forming gambit in medicinal chemistry: death or glory. Rarely is a strong conformational restriction silent in the SAR - usually, you either hit the magic conformer, or you move it forever out of reach.
I particularly noticed Wavefunction's line near the close of his post: "If nothing else they provide a few more valuable data points on the way to prediction nirvana." I know what he's talking about, and I think he's far from the only computational chemist with eschatological leanings. Eventually, you'd think, we'd understand enough about all the things we're trying to model for the models to, well, work. And yes, I know that there are models that work right now, but you don't know that they're going to work until you've messed with them a while, and there are other models that don't work but look equally plausible at first, etc., and very much etc. "Prediction nirvana" would be the state where you have an idea for a new structure, you enter it into your computational model, and it immediately tells you the right answer, every single time. In theory, I think this is a reachable state of affairs. In practice, it is not yet implemented.
And remember, people have spotted glows on that horizon before and proclaimed the imminent dawn. The late 1980s were such a time, but experiences like those tend to make people more reluctant to immanentize the eschaton, or at least not where anyone can hear. But we are learning more about enthalpic and entropic interactions, conformations, hydrogen bonds, nonpolar interactions, all those things that go into computational prediction of structure and binding interactions. And if we continue to learn more, as seems likely, won't there come a point when we've learned what we need to know? If not true computational nirvana, then surely (shrink those epsilons and deltas) as arbitrarily close an approach as we like?
+ TrackBacks (0) | Category: In Silico
February 27, 2014
Ah, the good old central nervous system, and its good old receptors. Especially the good old ion channels - there's an area with enough tricky details built into it to keep us all busy for another few decades. Here's a good illustration, in a new paper from Nature Chemical Biology. The authors, from Berkeley, are looking at the ionotropic glutamate receptors, an important (and brainbendingly complex) group. These are the NMDA, AMPA, and kainate receptors, if you name them by their prototype ligands, and they're assembled as tetramers from mix-and-match subunit proteins, providing a variety of species even before you start talking about splice variants and the like. This paper used a couple of the simpler kainate systems as a proving ground.
They're working with azobenzene-linked compounds that can be photoisomerized, and using that property as a switch. Engineering a Cys residue close to the binding pocket lets them swivel the compound in and out (as shown), and this gives them a chance to see how many of the four individual subunits need to be occupied, and what the states of the receptor are along the way. (The ligand does nothing when it's not tethered to the protein). The diagram shows the possible occupancy states, and the colored-in version shows what they found for receptor activation.
You apparently need two ligands just to get anything to happen (and this is consistent with previous work on these systems). Three ligands buys you more signaling, and the fourth peaks things out. Patch-clamp studies had already shown that these things are apparently capable of stepwise signaling, and this work nails that down ingeniously. Presumably this whole tetramer setup has been under selection to take advantage of that property, and you'd have to assume that the NMDA and AMPA receptors (extremely common ones, by the way) are behaving similarly. The diagram shows the whole matrix of what seems to be going on.
+ TrackBacks (0) | Category: Biological News
February 26, 2014
If you're in the mood for another reason why you should always be cautious about your biopharma investments, look no further than Galena Biopharma (GALE to its many clueless fans). I've been following this story over the last couple of weeks, and what a mess it is. Galena is a small company in Oregon with a few assets, including a cancer vaccine candidate. Its stock hovers in the low single digits, as is appropriate. But in December and January, it began to trade up, and up. From $2/share to $4. Then to $6, and then higher. And this on no particular news or change in the company's prospects, which for a stock like this is often a sign of "momentum" players getting involved. "Momentum" investing is a fancy name for "I'm buying this because it's going up", and the people who do this sort of thing are understandably anxious for you to buy some, too. They're also very, very unwilling to hear about anything that might cause the stock to go back down, because the proper direction for stocks, we must remember, is up. They only go down because of evil short bashers; everyone knows this.
Adam Feuerstein of TheStreet.com delivered a great big dose of that evil stuff (known to the rest of us as "reality") on February 12 with this article, which showed why the stock had been rising. The company was paying a PR firm to beat the drums for it, said drum-beating going as far as having people post multiple supposedly-independent articles on sites like Seeking Alpha under a list of pseudonyms.
An outfit called the "DreamTeam Group" was hired for the promotion. They run a stable of stock-touting web sites, full of wonderful tales about the companies that are paying them to say these wonderful things. And they spread the word on other sites (as above), and on Twitter, by e-mail and whatever means come to hand. If carrier pigeons come back into fashion, you can count on one fluttering down with a hot stock tip for you. And if you're greedy and stupid, you could see all this hype and convince yourself that a Great Opportunity is spawning right in front of you - why, all these people are buzzing about this hot little company, and money is right there for the taking. The only reason not to get in on a deal like this would be a lack of vision.
Galena's insiders do not lack vision. Indeed, they have proven beyond any doubt that money was in fact there for the taking. GALE peaked at nearly $8/share, but its directors and officers were unloading millions of dollars worth of shares into that market. And who could blame them? These are legal financial transactions between consenting adults, and if one set of those adults knows what's going on and the other set doesn't, well, it's that kind of world, isn't it? A look at any jungle will show the larger predators eating the smaller ones, and God knows the Street isn't any different.
Yesterday GALE closed at about $4, and many of its "investors" are hopping mad about that, as a look at Feuerstein's mailbag will show. But here are some cynical people who figure that the company is actually worth about seventy-two cents a share. Reasonable observers can disagree about that figure. But if you want to argue that the company is cruelly undervalued at $4, you are probably not a reasonable observer. Or you bought at $7. Same thing.
Update: if you'd like to know why people are so skeptical of the prospects for Galena's vaccine, look no further than this comment. It's right on target.
+ TrackBacks (0) | Category: Business and Markets | The Dark Side
Here's a look at some of the changes in JACS papers over the decades. Several trends are clear - there are more authors now, and single-author papers have almost vanished. Reference lists are much longer (which surely reflects both the size of the literature and the relative ease of bibliography compared to the bound-volume/index card days). Have a look at the charts - François-Xavier Coudert, the blog's author, says that he'll be putting up some more later on, and I look forward to seeing what comes up.
+ TrackBacks (0) | Category: The Scientific Literature
February 25, 2014
Back in 2010, I wrote about InterMune's drug for idiopathic pulmonary fibrosis, pirfenidone. The company's stock shot up on hopes that the compound would make it through the FDA, and then went straight back down when those proved ill-founded. The agency asked them for more data, and I wondered at the time if they'd be able to raise enough cash to generate it.
Well, they did, and the effort appears to have been worth it: the company says it met all its endpoints in Phase III, and is headed back to the FDA with what appears to be a solid story. Note that this press release, as opposed to the Pfizer one that I was mentioning earlier today, is full of data.
The company's stock has shot up, once again. If you've been an InterMune investor over the last few years, your fingernails are probably in bad shape and your combover is no longer plausible. The stock has had wild moves on rumors of takeovers (or lack of same) and anticipation of these clinical results. But good for them: they stuck with their compound, and it looks like it's paid off. And, just as a side note, good for people with fibrosis, too, eh?
+ TrackBacks (0) | Category: Business and Markets | Clinical Trials
Here's a nice look at why you should always think about the source of the financial and business information you read. It details the response to a recent Pfizer press release about palbociclib, a CDK inhibitor that's in late clinical trials.
Someone at The Wall Street Journal wrote that it had "the potential. . .to transform the standard of care for post-menopausal women with ER+ and HER2- advanced breast cancer." Problem is, that phrase was lifted directly out of the press release itself (and sure sounds like it), and you really would hope for better from the WSJ. What we're seeing here is actually Pfizer's own spin on the (as yet unpresented) results of the PALOMA-1 clinical trial. Everything a company says at this point will be couched in terms of "could" and "has the potential" and "we hope", and will come with one of those paragraphs at the end about "forward-looking statements". When it comes to the first statements about clinical trials results, if there are no numbers, there is nothing to talk about.
Paul Raeburn, the Knight Science Journalism blog author who picked up on this, also found that someone at the AP (and others) went for Pfizer's spin, too:
The problem is that this story was covered by business reporters rather than medical reporters, who by and large are too smart to fall for a company's claim about a drug without seeing the evidence presented, reviewed, and debated.
The further problem is that because they are so smart, medical writers mostly declined to cover this story. Which left the business writers out there alone, telling the story the company wanted them to tell.
Well, "medical writer" is a broad term, and believe me, there are some slackjaws in that crowd, too. But point taken - anyone who's been paying attention, or anyone who's willing to spend a few minutes on Google, should have realized that Pfizer is trying to make the case for accelerated approval of palbociclib, especially after the recent failure of dacomitinib and strong competition from Novartis in exactly the same therapeutic space.
Pfizer, of course, is not going to come out and talk about how delighted they are about the Phase II results unless they can back that up with something. I hope that palbociclib bowls people over - a new therapy for breast cancer would be good news. But we haven't seen the data yet, and data are all that will (or should) make pulses race over at the FDA. So I think that the Pfizer press release was worth noting, but stories like the Fierce Biotech one linked in the paragraph above are the way to do it. Put the news in context - don't just reword the press release.
+ TrackBacks (0) | Category: Cancer | Press Coverage
February 24, 2014
I wanted to mention the ACS Webinar series on Drug Discovery, which will be going on every so often throughout the year. I'm going to be doing the introductory overview one, along with Rick Connell of Pfizer and Nick Meanwell of BMS, this Thursday, 2PM to 3PM EST. As you'll note from the schedule, there are plenty more of these coming up that go into more detail, so we're going to be setting the stage and taking questions from the audience.
+ TrackBacks (0) | Category: Blog Housekeeping
Several people sent along this article from McKinsey Consulting, on "Why pharma megamergers work". They're looking (as you would expect) at shareholder value, shareholder value, and shareholder value as the main measurements of whether a deal "worked" or not. But John LaMattina, who lived through the Pfizer megamerger era and had a ringside seat, would like to differ with their analysis:
The disruption that the integration process causes is immeasurable. Winners and losers are created as a result of the leader selection process. Scars form as different research teams battle over which projects are superior to others. The angst even extends to one’s home life as people worry if their site will be closed and that they’ll be unemployed or, at best, be asked to uproot their families halfway across the country to a new research location. In such a situation, rumors are rife and speculation rampant. Focus that should be on science inevitably get diverted to one’s personal situation. This isn’t something that lasts just a few weeks. Often the integration process can take as much as a year.
The impact of these changes are not immediate. Rather, they take some years to become apparent. The Pfizer pipeline of experimental medicines, as published on its website, is about 60% of its peak about a decade ago, despite these acquisitions. Clearly, a company’s success isn’t assured by numbers, but one’s chances are enhanced by more R&D opportunities. I would argue these mergers have taken a toll on the R&D organization that wasn’t anticipated a decade ago.
Well, there have been naysayers along the way. "I think the Pfizer-Wyeth merger is a bad idea which will do bad things". "I'm deeply skeptical" is a comment from 2002. And here's 2008: "Pfizer is going to be having a rough time of it for years to come".
But here's where McKinsey's worldview comes in. Look at that last statement of mine, from 2008. If you just look at the stock since that date, well, I've been full of crap, haven't I? PFE has definitely outperformed the S&P 500 since the summer of 2008, and especially since mid-2011. There's your shareholder value right there, and what else is there in this life? But what might they have done, and what might the companies that they bought and pillaged have done, over the years? We'll never know. Things that don't happen, drugs that don't get discovered - they make no sound at all.
+ TrackBacks (0) | Category: Business and Markets | Drug Industry History
When we last checked in on the Great Stapled Peptide Wars, researchers from Genentech, the Walter and Eliza Hall Institute and La Trobe University (the latter two in Australia) had questioned the usefulness and activity of several stapled Bim BH3 peptides. The original researchers (Walensky et al.) had then fired back strongly, pointing out that the criticisms seemed misdirected and directing the authors back to what they thought had been well-documented principles of working with such species.
Now the WEHI/Genentech/La Trobe group (Okamoto et al.) has responded, and it doesn't look like things are going to calm down any time soon. They'd made much of the 20-mer stapled peptide being inactive in cells, while the reply had been that yes, that's true, as you might have learned from reading the original papers again - it was the 21-mer that was active in cells. Okamoto and co-workers now say that they've confirmed this, but only in some cell lines - there are others for which the 21-mer is still inactive. What's more, they say that a modified but un-stapled 21-mer is just as active as the stapled peptide, which suggests that the stapling might not be the key factor at all.
There's another glove thrown down (again). The earlier Genentech/WEHI/La Trobe paper had shown that the 20-mer had impaired binding to a range of Bcl target proteins. Walensky's team had replied that the 20-mer had been designed to have lower affinity, thus the poor binding results. But this new paper says that the 21-mer shows similarly poor binding behavior, so that can't be right, either.
This is a really short communication, and you get the impression that it was fired off as quickly as possible after the Walensky et al. rebuttal. There will, no doubt, be a reply. One aspect of it, I'm guessing, will be that contention about the unstapled peptide activity. I believe that the Walensky side of the argument has already shown that these substituted-but-unstapled peptides can show enhanced activity, probably due to cranking up their alpha-helical character (just not all the way to stapling them into that form). We shall see.
And this blowup reflects a lot of earlier dispute about Bcl, BAX/BAK peptides, and apoptosis in general. The WEHI group and others have been arguing out the details of these interactions in print for years, and this may be just another battlefield.
+ TrackBacks (0) | Category: Chemical Biology | Drug Assays
February 21, 2014
Update: the nomenclature of these enzymes is messy - see the comments.
Here's another activity-based proteomics result that I've been meaning to link to - in this one, the Cravatt group strengthens the case for carboxylesterase 3 as a potential target for metabolic disease. From what I can see, that enzyme was first identified back in about 2004, one of who-knows-how-many others that have similar mechanisms and can hydrolyze who-knows-how-many esters and ester-like substrates. Picking your way through all those things from first principles would be a nightmare - thus the activity-based approach, where you look for interesting phenotypes and work backwards.
In this case, they were measuring adipocyte behavior, specifically differentiation and lipid accumulation. A preliminary screen suggested that there were a lot of serine hydrolase enzymes active in these cells, and a screen with around 150 structurally diverse carbamates gave several showing phenotypic changes. The next step in the process is to figure out what particular enzymes are responsible, which can be done by fluorescence labeling (since the carbamates are making covalent bonds in the enzyme active sites). They found my old friend hormone-sensitive lipase, as well they should, but there was another enzyme that wasn't so easy to identify.
One particular carbamate, the unlovely but useful WWL113, was reasonably selective for the enzyme of interest, which turned out to be the abovementioned carboxylesterase 3 (Ces3). The urea analog (which should be inactive) did indeed show no cellular readouts, and the carbamate itself was checked for other activities (such as whether it was a PPAR ligand). These established a strong connection between the inhibitor, the enzyme, and the phenotypic effects.
With that in hand, they went on to find a nicer-looking compound with even better selectivity, WWL229. (I have to say, going back to my radio-geek days in the 1970s and early 1980s, that I can't see the letters "WWL" without hearing Dixieland jazz, but that's probably not the effect the authors are looking for). Using an alkyne derivative of this compound as a probe, it appeared to label only the esterase of interest across the entire adipocyte proteome. Interestingly, though, it appears that WWL113 was more active in vivo (perhaps for pharmacokinetic reasons?).
And those in vivo studies in mice showed that Ces3 inhibition had a number of beneficial effects on tissue and blood markers of metabolic syndrome - glucose tolerance, lipid profiles, etc. Histologically, the most striking effect was the clearance of adipose deposits from the liver (a beneficial effect indeed, and one that a number of drug companies are interested in). This recapitulates genetic modification studies in rodents targeting this enzyme, and shows that pharmacological inhibition could do the job. And while I'm willing to bet that the authors would rather have discovered a completely new enzyme target, this is solid work all by itself.
+ TrackBacks (0) | Category: Biological News | Chemical Biology | Diabetes and Obesity
Just Like Cooking has an overview of some interesting new chemistry from the Hartwig group. They're using a rhodium catalyst to directly functionalize aryl rings with silyl groups (which can be used in a number of transformations downstream). One nice thing is that the selectivities are basically the opposite of the direct borylation reactions, so this could open up some isomers that are otherwise difficult to come by.
See Arr Oh makes a good point about the paper, too - it has a lot of detail in it and a lot of information. If you check out the Supplementary Information, there are about thirty pages of further details, and about sixty pages of spectral data. I particularly like the tables of various reaction conditions, hydrogen acceptors, and ligands. The main paper shows the conditions that work the best, but this gives you a chance to see under the hood at everything else that was tried. Every new methods paper should do this - in fact, every new methods paper should be required to do this. Good stuff.
+ TrackBacks (0) | Category: Chemical News
February 20, 2014
The NIH is starting to wonder what bang-for-the-buck it gets for its grant money. That's a tricky question at best - some research takes a while to make an impact, and the way that discoveries can interact is hard to predict. And how do you measure impact, by the way? These are all worthy questions, but here's apparently the way things are being approached:
Michael Lauer's job at the National Institutes of Health (NIH) is to fund the best cardiology research and to disseminate the results rapidly to other scientists, physicians, and the public. But NIH's peer-review system, which relies on an army of unpaid volunteer scientists to prioritize grant proposals, may be making it harder to achieve that goal. Two recent studies by Lauer, who heads the Division of Cardiovascular Sciences at NIH's National Heart, Lung, and Blood Institute (NHLBI) in Bethesda, Maryland, raise some disturbing questions about a system used to distribute billions of dollars of federal funds each year.
Lauer recently analyzed the citation record of papers generated by nearly 1500 grants awarded by NHLBI to individual investigators between 2001 and 2008. He was shocked by the results, which appeared online last month in Circulation Research: The funded projects with the poorest priority scores from reviewers garnered just as many citations and publications as those with the best scores. That was the case even though low-scoring researchers had been given less money than their top-rated peers.
I understand that citations and publications are measurable, while most other ways to gauge importance aren't. But that doesn't mean that they're any good, and I worry that the system is biased enough already towards making these the coin of the realm. This sort of thing worries me, too:
Still, (Richard) Nakamura is always looking for fresh ways to assess the performance of study sections. At the December meeting of the CSR advisory council, for example, he and Tabak described one recent attempt that examined citation rates of publications generated from research funded by each panel. Those panels with rates higher than the norm—represented by the impact factor of the leading journal in that field—were labeled "hot," while panels with low scores were labeled "cold."
"If it's true that hotter science is that which beats the journals' impact factors, then you could distribute more money to the hot committees than the cold committees," Nakamura explains. "But that's only if you believe that. Major corporations have tried to predict what type of science will yield strong results—and we're all still waiting for IBM to create a machine that can do research with the highest payoff," he adds with tongue in cheek.
"I still believe that scientists ultimately beat metrics or machines. But there are serious challenges to that position. And the question is how to do the research that will show one approach is better than another."
I'm glad that he doesn't seem to be taking this approach completely seriously, but others may. If only impact factors and citation rates were real things that advanced human knowledge, instead of games played by publishers and authors!
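For what it's worth, the hot/cold scheme Nakamura describes boils down to a single threshold comparison. Here's a hypothetical sketch of the rule as quoted above - the panel names and all the numbers are invented for illustration:

```python
# Hypothetical sketch of the "hot"/"cold" study-section labeling described
# above: compare a panel's citation rate to the impact factor of the
# leading journal in its field. All values invented for illustration.

def label_panel(citation_rate: float, field_impact_factor: float) -> str:
    """Label a panel 'hot' if its funded papers out-cite the field's
    leading journal, 'cold' otherwise."""
    return "hot" if citation_rate > field_impact_factor else "cold"

panels = {
    "Cardiology A": (12.3, 14.0),  # (mean citations per paper, field IF)
    "Cardiology B": (18.1, 14.0),
}
labels = {name: label_panel(*vals) for name, vals in panels.items()}
# Cardiology A -> "cold", Cardiology B -> "hot"
```

Which makes the problem obvious: the whole exercise stands or falls on whether that one journal impact factor is a meaningful yardstick in the first place.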
+ TrackBacks (0) | Category: The Scientific Literature | Who Discovers and Why
Here's Ian Read of Pfizer, on that company's reputation (and that of pharma in general):
. . .many people — including not only regulators but also legislators and their constituents — have a say in how we can conduct our business. At the same time, many have a great and sometimes emotionally charged interest in what our business produces, what we charge for our products and how we sell them, among other topics. And all of this together shines a brighter light on our business than most others, which makes our reputation all the more important to us. In fact, everything from government reimbursement for our medicines to protection of our intellectual property to our ability to continue innovating in our labs depends on our reputation. Indeed, our virtual license to operate depends on this. It depends on earning the respect of our regulators, legislators, healthcare professionals, patients, R&D partners and of our employees, current and future.
This is why we made “earning greater respect from society” one of our four business imperatives not long after I was named CEO of Pfizer in late 2010.
Without this respect and the consideration that comes with it we could not sustain our business, with its innumerable collaborative dependencies and its central place in an area of life so important to us all, our health. Making reputation and respect all the more important to us is knowing that we gain it in drops, but lose it in gallons.
True enough. Has Pfizer lost a gallon or two? He doesn't really say. His piece also does not say if there are any specific actions that Pfizer (or other companies) have taken that might have caused some of this respect leakage. Nor does it go into any detail about what steps might be taken to get any of it back, other than boardroom-speak like "connect better with our stakeholders". But it's a start, I suppose.
+ TrackBacks (0) | Category: Why Everyone Loves Us
February 19, 2014
This is a very timely post indeed from Peter Murray-Rust. He's describing a system that his group has developed (ChemVisitor) to dig through the chemical literature looking for incorrect structures (and much more).
He shows examples from an open-access paper, in which one of the structures is in fact misdrawn. But how would Elsevier, Nature, the ACS, Wiley or the other big publishers take to having these things highlighted every day of the week? Not well:
So try it for yourself. Which compound is wrong? (*I* don’t know yet) How would you find out? Maybe you would go to Chemical Abstracts (ACS). Last time I looked it cost 6USD to look up a compound. That’s 50 dollars, just to check whether the literature is right. And you would be forbidden from publishing what you found there (ACS sent the lawyers to Wikipedia for publishing CAS registry numbers). What about Elsevier’s Reaxys? Almost certainly as bad.
But isn’t there an Open collection of molecules? Pubchem in the NIH? Yes, and ACS lobbied on Capitol Hill to have it shut down as it was “socialised science instead of the private sector”. They nearly won. (Henry Rzepa and I ran a campaign to highlight the issue). So yes, we can use Pubchem and we have and that’s how Andy’s software discovered the mistake.
This was the first diagram we analysed. Does that mean that every paper in the literature contains mistakes?
Almost certainly yes.
But they have been peer-reviewed.
Yes – and we wrote software (OSCAR) 10 years ago that could do the machine reviewing. And it showed mistakes in virtually every paper.
So we plan to do this for every new paper. It’s technically possible. But if we do it what will happen?
If I sign the Elsevier content-mining click-through (I won’t) then I agree not to disadvantage Elsevier’s products. And pointing out publicly that they are full of errors might just do that. And if I don’t?…
This comment on Ycombinator is from someone who's seen some of the Murray-Rust group's software in action, and is very interesting indeed:
They can take an ancient paper with very low quality diagrams of complex chemical structures, parse the image into an open markup language and reconstruct the chemical formula and the correct image. Chemical symbols are just one of many plugins for their core software which interprets unstructured, information rich data like raster diagrams. They also have plugins for phylogenetic trees, plots, species names, gene names and reagents. You can develop plugins easily for whatever you want, and they're recruiting open source contributors (see https://solvers.io/projects/QADhJNcCkcKXfiCQ6, https://solvers.io/projects/4K3cvLEoHQqhhzBan).
As a side effect of how their software works, it can detect tiny suggestive imperfections in images that reveal scientific fraud. I was shown a demo where a trace from a mass spec (like this http://en.wikipedia.org/wiki/File:ObwiedniaPeptydu.gif) was analysed. As well as reading the data from the plot, it revealed a peak that had been covered up with a square - the author had deliberately obscured a peak in their data that was inconvenient. Scientific fraud. It's terrifying that they find this in most chemistry papers they analyse.
Peter's group can analyse thousands or hundreds of thousands of papers an hour, automatically detecting errors and fraud. . .
Unless I'm very much mistaken, we'll be hearing a lot more about this. It touches on the quality of the literature, the quality of the people writing the papers, and the business model(s) of the people publishing it all. And these are very, very relevant topics that are getting more important all the time. . .
+ TrackBacks (0) | Category: The Scientific Literature
I'd like to throw a few more logs on the ligand efficiency fire. Chuck Reynolds of J&J (author of several papers on the subject, as aficionados know) left a comment to an earlier post that I think needs some wider exposure. I've added links to the references:
An article by Shultz was highlighted earlier in this blog and is mentioned again in this post on a recent review of Ligand Efficiency. Shultz’s criticism of LE, and indeed drug discovery “metrics” in general hinges on: (1) a discussion about the psychology of various metrics on scientists' thinking, (2) an assertion that the original definition of ligand efficiency, DeltaG/HA, is somehow flawed mathematically, and (3) counter examples where large ligands have been successfully brought to the clinic.
I will abstain from addressing the first point. With regard to the second, the argument that there is some mathematical rule that precludes dividing a logarithmic quantity by an integer is wrong. LE is simply a ratio of potency per atom. The fact that a log is involved in computing DeltaG, pKi, etc. is immaterial. He makes a more credible point that LE itself is on average non-linear with respect to large differences in HA count. But this is hardly a new observation, since exactly this trend has been discussed in detail by previous published studies (here, here, here, and here). It is, of course, true that if one goes to very low numbers of heavy atoms the classical definition of LE gets large, but as a practical matter medicinal chemists have little interest in extremely small fragments, and the mathematical catastrophe he warns us against only occurs when the number of heavy atoms goes to zero (with a zero in the denominator it makes no difference if there is a log in the numerator). Why would HA=0 ever be relevant to a med. chem. program? In any case a figure essentially equivalent to the prominently featured Figure 1a in the Shultz manuscript appears in all of the four papers listed above. You just need to know they exist.
With regard to the third argument, yes, of course there are examples of drugs that defy one or more of the common guidelines (e.g. MW). This seems to be a general problem of the community taking metrics and somehow turning them into "rules." They are just helpful, hopefully, guideposts to be used as the situation and an organization's appetite for risk dictate. One can only throw the concept of ligand efficiency out the window completely if one disagrees with the general principle that it is better to design ligands where the atoms all, as much as possible, contribute to that molecule being a drug (e.g. potency, solubility, transport, tox, etc.). The fact that there are multiple LE schemes in the literature is just a natural consequence of ongoing efforts to refine, improve, and better apply a concept that most would agree is fundamental to successful drug discovery.
Well, as far as the math goes, dividing a log by an integer is not any sort of invalid operation. [log(x)]/y is the same as saying log(x to the one over y). That is, log(16) divided by 2 is the same as the log of 16 to the one-half power, or log(4). They both come out to about 0.602. Taking a BEI calculation as a real chemistry example, a one-micromolar compound that weighs 250 would, by the usual definition, -log(Ki)/(MW/1000), have a BEI of 6/0.25, or 24. By the above rule, if you want to keep everything inside the log function, then instead of -log(0.000001) divided by 0.25, that one-micromolar figure should be raised to the fourth power (one over 0.25), and then you take the log of the result (and flip the sign). One-millionth to the fourth power is one times ten to the minus twenty-fourth, so that gives you. . .24. No problem.
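For anyone who wants to check that arithmetic, here's a quick sketch in Python using the standard library; the numbers are the ones from the paragraph above, nothing more:

```python
import math

# The identity in question: log(x)/y == log(x**(1/y))
assert math.isclose(math.log10(16) / 2, math.log10(4))

# BEI example: a one-micromolar compound (Ki = 1e-6 M) with MW 250
ki = 1e-6
mw_kda = 250 / 1000  # molecular weight in kDa, i.e. 0.25

# The usual definition: -log(Ki) divided by MW in kDa
bei = -math.log10(ki) / mw_kda
print(bei)  # 24.0

# Keeping everything inside the log: raise Ki to the 1/0.25 = 4th power first
bei_inside = -math.log10(ki ** (1 / mw_kda))
print(bei_inside)  # also 24 (to within floating-point error)
```

Both routes land on the same number, which is the whole point: there's nothing mathematically suspect about dividing a logarithm by a constant.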
Shultz's objection that LE is not linear per heavy atom, though, is certainly valid, as Reynolds notes above. You have to bear that in mind while you're thinking about the topic. I think that one of the biggest problems with these metrics - and here's a point that both Reynolds and Shultz can agree on, I'll bet - is that they're tossed around too freely by people who would like to use them as a substitute for thought in the first place.
+ TrackBacks (0) | Category: Drug Assays | Drug Development | In Silico
February 18, 2014
My recent memorial notice for Alan Katritzky brought an interesting comment, by someone who did the math that I didn't. His total publication count actually seems to be 2215 papers published since 1953. That comes to one paper every ten days over sixty-one years. I have trouble even imagining that. This is a vast amount of work, both the chemistry and the writing, and it's a monument that very, very few people will leave behind them.
Should anyone, though? I mean no disrespect to Katritzky's memory by asking this question, let me note quickly. But I wrote here a few years ago about the idea that a person can, in fact, publish too many papers. My example then was H. C. Brown - I'm not sure how many papers he co-authored, but the figure is a large one and brings on similar thoughts.
There have been many scientists on the other end of the scale, going back to Isaac Newton, who had to be badgered into letting everyone know that he'd revolutionized physics. Lars Onsager is a good example from the physics/chemistry borderlands - there are stories told about the work he had stored away in his filing cabinets, a terrifying stockpile of results that most other people would have been only too glad to publish. This approach is clearly not the way to go, either - people don't have a chance to build on your work when they don't know about it, and others may spend a good amount of effort duplicating things when you could (and should) have saved them the trouble.
But publishing thousands of papers doesn't seem like a good alternative, to be honest. People will have trouble sorting out the good parts, and they can't all be good parts. Time will settle that question, you might think, but time could bungle that job. Time has dropped the ball before.
Update: See Arr Oh did a very interesting list last year of the organic chemists who publish the most. Katritzky did indeed lead the field.
+ TrackBacks (0) | Category: The Scientific Literature
Oh, @#$!. That was my first comment when I saw this story. That extraordinary recent work on creating stem cells by subjecting normal cells to acid stress is being investigated:
The RIKEN centre in Kobe announced on Friday that it is looking into alleged irregularities in the work of biologist Haruko Obokata, who works at the institution. She shot to fame last month as the lead author on two papers published in Nature that demonstrated a simple way to reprogram mature mouse cells into an embryonic state by simply applying stress, such as exposure to acid or physical pressure on cell membranes. The RIKEN investigation follows allegations on blog sites about the use of duplicated images in Obokata's papers, and numerous failed attempts to replicate her results.
PubPeer gets the credit for bringing some of the problems into the light. There are some real problems with figures in the two papers, as well as in earlier ones from the same authors. These might be explicable as simple mistakes, which is what the authors seem to be claiming, if it weren't for the fact that no one seems to be able to get the stem-cell results to reproduce. There are mitigating factors there, too - different cell lines, perhaps the lack of a truly detailed protocol in the original paper. But a paper should have enough details in it to be reproduced, shouldn't it?
Someone on Twitter was trying to tell me the other day that the whole reproducibility issue was being blown out of proportion. I don't think so. The one thing we seem to be able to reproduce is trouble.
Update: a list of the weirdest things (so far) about this whole business.
+ TrackBacks (0) | Category: Biological News | The Scientific Literature
February 17, 2014
I see that as of this moment, the five articles at the top of the Organic Letters ASAP feed are all corrections from the Nakada group. It looks to be the same situation as the Fukuyama corrections: NMR editing. The corrections (visible for free because you can see the first page) all mention that the conclusions of the papers are not altered, nor are the yields of products. So Amos Smith really is serious about his data-cleaning crusade, and I can see where he's coming from. Falsus in uno, falsus in omnibus. The only place to draw the line is right back at the start.
+ TrackBacks (0) | Category: The Scientific Literature
I'd like to recommend this article from Nature (which looks to be open access). It details the problems with using p-values for statistics, and it's simultaneously interesting and frustrating to read. The frustrating part is that the points it makes have been made many times before, but to little or no effect. P-values don't mean what a lot of people think they mean, and what meaning they have can be obscured by circumstances. There really should be better ways for scientists to communicate the statistical strength of their results:
One result is an abundance of confusion about what the P value means. Consider Motyl's study about political extremists. Most scientists would look at his original P value of 0.01 and say that there was just a 1% chance of his result being a false alarm. But they would be wrong. The P value cannot say this: all it can do is summarize the data assuming a specific null hypothesis. It cannot work backwards and make statements about the underlying reality. That requires another piece of information: the odds that a real effect was there in the first place. To ignore this would be like waking up with a headache and concluding that you have a rare brain tumour — possible, but so unlikely that it requires a lot more evidence to supersede an everyday explanation such as an allergic reaction. The more implausible the hypothesis — telepathy, aliens, homeopathy — the greater the chance that an exciting finding is a false alarm, no matter what the P value is.
Critics also bemoan the way that P values can encourage muddled thinking. A prime example is their tendency to deflect attention from the actual size of an effect. Last year, for example, a study of more than 19,000 people showed that those who meet their spouses online are less likely to divorce (p < 0.002) and more likely to have high marital satisfaction (p < 0.001) than those who meet offline (see Nature http://doi.org/rcg; 2013). That might have sounded impressive, but the effects were actually tiny: meeting online nudged the divorce rate from 7.67% down to 5.96%, and barely budged happiness from 5.48 to 5.64 on a 7-point scale. To pounce on tiny P values and ignore the larger question is to fall prey to the “seductive certainty of significance”, says Geoff Cumming, an emeritus psychologist at La Trobe University in Melbourne, Australia. But significance is no indicator of practical relevance, he says: “We should be asking, 'How much of an effect is there?', not 'Is there an effect?'”
The article has some suggestions about what to do, but seems guardedly pessimistic about the likelihood of change. The closer you look at it, though, the more our current system looks like an artifact that was never meant to be used in the way we're using it.
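The online-dating study quoted above is a good illustration of how a large sample can manufacture a tiny p-value for a trivial effect. Here's a rough sketch of that situation in Python, using a standard two-proportion z-test computed with only the standard library. To be clear, the even split of the ~19,000 participants between the two groups is my own assumption for illustration, not a figure from the study:

```python
import math

def two_proportion_pvalue(p1, n1, p2, n2):
    """Two-sided z-test for a difference between two proportions."""
    p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal survival function
    return math.erfc(abs(z) / math.sqrt(2))

# Divorce rates from the study: 7.67% (met offline) vs. 5.96% (met online),
# with the ~19,000 participants split evenly (a hypothetical assumption)
p = two_proportion_pvalue(0.0767, 9500, 0.0596, 9500)
print(f"p-value: {p:.1g}")                      # far below 0.05
print(f"effect:  {0.0767 - 0.0596:.2%} points")  # under two percentage points
```

The p-value comes out vanishingly small, while the actual difference is less than two percentage points - exactly Cumming's point about asking "how much of an effect?" rather than "is there an effect?"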
+ TrackBacks (0) | Category: General Scientific News
I've received word that well-known organic chemist Alan Katritzky has passed away. He's famous for his work on the use of benzotriazole compounds, and a great deal of other heterocyclic chemistry besides (2,170 papers!)
I first heard him speak in the early 1990s at the Heterocycles Gordon Conference, back in its old location in New Hampshire. And although I'd been warned to sit near the back of the conference room, I still wasn't ready for the. . .vigor he brought to his presentation. Katritzky had clearly honed his lecturing style in large, unamplified halls, and could be easily heard outside on the lawn. The next day, Stuart McCombie opened the morning program by thanking him for ". . .sharing with me the last secret of benzotriazole. He sprinkled some down my throat AND I NEVER NEED A MICROPHONE AGAIN!"
Katritzky was a link to another era of chemistry (he studied under Sir Robert Robinson), but he leaves behind a huge legacy of work for the modern researcher. He may well have been too productive for his accomplishments to be easily categorized, at least not yet (those 2,170 papers. . .), but there's no doubt that his name will live on.
+ TrackBacks (0) | Category: Chemical News