About this Author
Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship during his post-doc. Since 1989, he has worked for several major pharmaceutical companies on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis, and other diseases.
To contact Derek email him directly: firstname.lastname@example.org
In the Pipeline:
Don't miss Derek Lowe's excellent commentary on drug discovery and the pharma industry in general at In the Pipeline.
May 17, 2013
Compare and contrast. Here we have Krishnan Ramalingam, from Ranbaxy's Corporate Communications department, in 2006:
Being a global pharmaceutical major, Ranbaxy took a deliberate decision to pool its resources to fight neglected disease segments. . .Ranbaxy strongly felt that generic antiretrovirals are essential in fighting the world-wide struggle against HIV/AIDS, and therefore took a conscious decision to embark upon providing high quality affordable generics for patients around the world, specifically for the benefit of Least Developed Countries. . .Since 2001, Ranbaxy has been providing antiretroviral medicines of high quality at affordable prices for HIV/AIDS affected countries for patients who might not otherwise be able to gain access to this therapy.
And here we have them in an advertorial section of the South African Mail and Guardian newspaper, earlier this year:
Ranbaxy has a long standing relationship with Africa. It was the first Indian pharmaceutical company to set up a manufacturing facility in Nigeria, in the late 1970s. Since then, the company has established a strong presence in 44 of the 54 African countries with the aim of providing quality medicines and improving access. . .Ranbaxy is a prominent supplier of Antiretroviral (ARV) products in South Africa through its subsidiary Sonke Pharmaceuticals. It is the second largest supplier of high quality affordable ARV products in South Africa which are also extensively used in government programs providing access to ARV medicine to millions.
Yes, as Ranbaxy says on its own web site: "At Ranbaxy, we believe that Anti-retroviral (ARV) therapy is an essential tool in waging the war against HIV/AIDS. . .We estimate currently close to a million patients worldwide use our ARV products for their daily treatment needs. We have been associated with this cause since 2001 and were among the first generic companies to offer ARVs to various National AIDS treatment programmes in Africa. We were also responsible for making these drugs affordable in order to improve access. . ."
And now we descend from the heights. Here, in a vivid example of revealed preference versus stated preference, is what was really going on, from that Fortune article I linked to yesterday:
. . .as the company prepared to resubmit its ARV data to WHO, the company's HIV project manager reiterated the point of the company's new strategy in an e-mail, cc'ed to CEO Tempest. "We have been reasonably successful in keeping WHO from looking closely at the stability data in the past," the manager wrote, adding, "The last thing we want is to have another inspection at Dewas until we fix all the process and validation issues once and for all."
. . .(Dinesh) Thakur knew the drugs weren't good. They had high impurities, degraded easily, and would be useless at best in hot, humid conditions. They would be taken by the world's poorest patients in sub-Saharan Africa, who had almost no medical infrastructure and no recourse for complaints. The injustice made him livid.
Ranbaxy executives didn't care, says Kathy Spreen, and made little effort to conceal it. In a conference call with a dozen company executives, one brushed aside her fears about the quality of the AIDS medicine Ranbaxy was supplying for Africa. "Who cares?" he said, according to Spreen. "It's just blacks dying."
I have said many vituperative things about HIV hucksters like Matthias Rath, who have told patients in South Africa to throw away their antiviral medications and take his vitamin supplements instead. What, then, can I say about people like this, who callously and intentionally provided junk, labeled as what were supposed to be effective drugs, to people with no other choice and no recourse? If this is not criminal conduct, I'd very much like to know what is.
And why is no one going to jail? I'm suggesting jail as a civilized alternative to a barbaric, but more appealingly direct form of justice: shipping the people who did this off to live in a shack somewhere in southern Africa, infected with HIV, and having them subsist as best they can on the drugs that Ranbaxy found fit for their sort.
Category: Infectious Diseases | The Dark Side
May 16, 2013
Here's an excellent, detailed look from Fortune at how things went off the rails at Ranbaxy, the maker of generic atorvastatin (Lipitor). The company has been hit by a huge fine, and no wonder. This will give you the idea:
On May 13, Ranbaxy pleaded guilty to seven federal criminal counts of selling adulterated drugs with intent to defraud, failing to report that its drugs didn't meet specifications, and making intentionally false statements to the government. Ranbaxy agreed to pay $500 million in fines, forfeitures, and penalties -- the most ever levied against a generic-drug company. (No current or former Ranbaxy executives were charged with crimes.) Thakur's confidential whistleblower complaint, which he filed in 2007 and which describes how the company fabricated and falsified data to win FDA approvals, was also unsealed. Under federal whistleblower law, Thakur will receive more than $48 million as part of the resolution of the case. . .
. . .(he says that) they stumbled onto Ranbaxy's open secret: The company manipulated almost every aspect of its manufacturing process to quickly produce impressive-looking data that would bolster its bottom line. "This was not something that was concealed," Thakur says. It was "common knowledge among senior managers of the company, heads of research and development, people responsible for formulation to the clinical people."
Lying to regulators and backdating and forgery were commonplace, he says. The company even forged its own standard operating procedures, which FDA inspectors rely on to assess whether a company is following its own policies. Thakur's team was told of one instance in which company officials forged and backdated a standard operating procedure related to how patient data are stored, then aged the document in a "steam room" overnight to fool regulators.
Company scientists told Thakur's staff that they were directed to substitute cheaper, lower-quality ingredients in place of better ingredients, to manipulate test parameters to accommodate higher impurities, and even to substitute brand-name drugs in lieu of their own generics in bioequivalence tests to produce better results.
You name it, it's probably there. Good thing the resulting generic drugs were cheap, eh? And I suppose these details render inoperative, as the Nixon staff used to say, the explanations that the company used to have about talk of such problems, that it was all the efforts of their big pharma competitors and some unscrupulous stock market types. (Whenever you see a company's CEO going on about a conspiracy to depress his company's share price, you should worry).
The whole article is well worth reading - your eyebrows are guaranteed to go up a few times. This whole affair has been a damaging blow to the whole offshore generics business, India's in particular, and does not help them wear their "Low cost drugs for the poor" halo any better. Not when your pills have glass particles in them along with (or instead of) the active ingredient. . .
Category: The Dark Side
"Can you respond to this tripe?" asked one of the emails that sent along this article in The Atlantic. I responded that I was planning to, but that things were made more complicated by my being extensively quoted in said tripe. Anyway, here goes.
The article, by Brian Till of the New America Foundation, seems somewhat confused, and is written in a confusing manner. The title is "How Drug Companies Keep Medicine Out of Reach", but the focus is on neglected tropical diseases, not all medicine. Well, the focus is actually on a contested WHO treaty. But the focus is also on the idea of using prizes to fund research, and on the patent system. And the focus is on the general idea of "delinking" R&D from sales in the drug business. Confocal prose not having been perfected yet, this makes the whole piece a difficult read, because no matter which of these ideas you're waiting to hear about, you end up having a long wait while you work your way through the other stuff. There are any number of sentences in this piece that reference "the idea" and its effects, but there is no sentence that begins with "Here's the idea."
I'll summarize: the WHO treaty in question is as yet formless. There is no defined treaty to be debated; one of the article's contentions is that the US has blocked things from even getting that far. But the general idea is that signatory states would commit to spending 0.01% of GDP on neglected diseases each year. Where this money goes is not clear. Grants to academia? Setting up new institutes? Incentives to commercial companies? And how the contributions from various countries are to be managed is not clear, either: should Angola (for example) pool its contributions with other countries (or send them somewhere else outright), or is it interested in setting up its own Angolan Institute of Tropical Disease Research?
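For a rough sense of scale, it's worth working out what that 0.01% commitment would actually mean in dollars. The GDP figures below are my own approximate 2012 estimates, not numbers from the article:

```python
# Back-of-the-envelope: 0.01% of GDP per year, as per the proposed treaty.
# GDP figures are rough 2012 estimates in US dollars (my assumption,
# not from the article).
gdp_usd = {
    "United States": 16.2e12,
    "South Africa": 0.40e12,
    "Angola": 0.11e12,
}

RATE = 0.0001  # 0.01% of GDP

for country, gdp in gdp_usd.items():
    contribution = gdp * RATE
    print(f"{country}: ${contribution / 1e6:,.0f} million per year")
```

On those assumed figures, the US commitment would come to something like $1.6 billion a year, while a country like Angola's would be on the order of $10 million, which is exactly why the pooling question matters.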
The fuzziness continues. You will read and read through the article trying to figure out what happens next. The "delinking" idea comes in as a key part of the proposed treaty negotiations, with the reward for discovery of a tropical disease treatment coming from a prize for its development, rather than patent exclusivity. But where that money comes from (the GDP-linked contributions?) is unclear. Who sets the prize levels, at what point the money is awarded, who it goes to: hard to say.
And the "Who it goes to" question is a real one, because the article says that another part of the treaty would be a push for open-source discovery on these diseases (Matt Todd's malaria efforts at Sydney are cited). This, though, is to a great extent a whole different question than the source-of-funds one, or the how-the-prizes-work one. Collaboration on this scale is not easy to manage (although it might well be desirable) and it can end up replacing the inefficiencies of the marketplace with entirely new inefficiencies all its own. The research-prize idea seems to me to be a poor fit for the open-collaboration model, too: if you're putting up a prize, you're saying that competition between different groups will spur them on, which is why you're offering something of real value to whoever finishes first and/or best. But if it's a huge open-access collaboration, how do you split up the prize, exactly?
At some point, the article's discussion of delinking R&D and the problems with the current patent model spreads fuzzily outside the bounds of tropical diseases (where there really is a market failure, I'd say) and starts heading off into drug discovery in general. And that's where my quotes start showing up. The author did interview me by phone, and we had a good discussion. I'd like to think that I helped emphasize that when we in the drug business say that drug discovery is hard, we're not just putting on a show for the crowd.
But there's an awful lot of "Gosh, it's so cheap to make these drugs, why are they so expensive?" in this piece. To be fair, Till does mention that drug discovery is an expensive and risky undertaking, but I'm not sure that someone reading the article will quite take on board how expensive and how risky it is, and what the implications are. There's also a lot of criticism of drug companies for pricing their products at "what the market will bear", rather than as some percentage of what it cost to discover or make them. This is a form of economics I've criticized many times here, and I won't go into all the arguments again - but I will ask: what other products are priced in such a manner, other than at what customers will pay for them? Implicit in these arguments is the idea that there's some sort of reasonable, gentlemanly profit that won't offend anyone's sensibilities, while grasping for more than that is just something that shouldn't be allowed. But just try to run an R&D-driven business on that concept. I mean, the article itself details the trouble that Eli Lilly, AstraZeneca, and others are facing with their patent expirations. What sort of trouble would they be in if they'd said "No, no, we shouldn't make such profits off our patented drugs. That would be indecent."? Even with those massive profits, they're in trouble.
And that brings up another point: we also get the "Drug companies only spend X pennies per dollar on R&D". That's the usual response to pointing out situations like Lilly's; that they took the money and spent it on fleets of yachts or something. The figure given in the article is 16 cents per dollar of revenue, and it's prefaced by an "only". Only? Here, go look at different industries, around the world, and find one that spends more. By any industrial standard, we are plowing massive amounts back into the labs. I know that I complain about companies doing things like stock buybacks, but that's a complaint at the margin of what is already pretty impressive spending.
To finish up, here's one of the places I'm quoted in the article:
I asked Derek Lowe, the chemist and blogger, for his thoughts on the principle of delinking R&D from the actual manufacture of drugs, and why he thought the industry, facing such a daunting outlook, would reject an idea that could turn fallow fields of research on neglected diseases into profitable ones. "I really think it could be viable," he said. "I would like to see it given a real trial, and neglected diseases might be the place to do it. As it is, we really already kind of have a prize model in the developed countries, market exclusivity. But, at the same time, you could look at it and it will say, 'You will only make this amount of money and not one penny more by curing this tropical disease.' Their fear probably is that if that model works great, then we'll move on to all the other diseases."
What you're hearing is my attempt to bring in the real world. I think that prizes are, in fact, a very worthwhile thing to look into for market failures like tropical diseases. There are problems with the idea - for one thing, the prize payoff itself, compared with the time and opportunity cost, is hard to get right - but it's still definitely worth thinking about. But what I was trying to tell Brian Till was that drug companies would be worried (and rightly) about the extension of this model to all other disease areas. Wrapped up in the idea of a research-prize model is the assumption that someone (a wise committee somewhere) knows just what a particular research result is worth, and can set the payout (and afterwards, the price) accordingly. This is not true.
There's a follow-on effect. Such a wise committee might well feel a bit of political pressure to set those prices down to a level of nice and cheap, the better to make everyone happy. Drug discovery being what it is, it would take some years before all the gears ground to a halt, but I worry that something like this might be the real result. I find my libertarian impulses coming to the fore whenever I think about this situation, and that prompts me to break out an often-used quote from Robert Heinlein:
Throughout history, poverty is the normal condition of man. Advances which permit this norm to be exceeded — here and there, now and then — are the work of an extremely small minority, frequently despised, often condemned, and almost always opposed by all right-thinking people. Whenever this tiny minority is kept from creating, or (as sometimes happens) is driven out of a society, the people then slip back into abject poverty.
This is known as "bad luck."
Category: Drug Development | Drug Prices | Why Everyone Loves Us
May 15, 2013
I was talking with someone the other day about the most difficult targets and therapeutic areas we knew, and that brought up the question: which of these has had the greatest number of clinical failures? Sepsis was my nomination: I know that there have been several attempts, all of which have been complete washouts. And for mechanisms, defined broadly, I nominate PPAR ligands. The only ones to make it through were the earliest compounds, discovered even before their target had been identified. What other nominations do you have?
Category: Clinical Trials | Drug Industry History
Speaking about open-source drug discovery (such as it is) and sharing of data sets (such as they are), I really should mention a significant example in this area: the GSK Published Kinase Inhibitor Set. (It was mentioned in the comments to this post). The company has made 367 compounds available to any academic investigator working in the kinase field, as long as they make their results publicly available (at ChEMBL, for example). The people at GSK doing this are David Drewry and William Zuercher, for the record - here's a recent paper from them and their co-workers on the compound set and its behavior in reporter-gene assays.
Why are they doing this? To seed discovery in the field. There's an awful lot of chemical biology to be done in the kinase field, far more than any one organization could take on, and the more sets of eyes (and cerebral cortices) that are on these problems, the better. So far, there have been about 80 collaborations, mostly in Europe and North America, all the way from broad high-content phenotypic screening to targeted efforts against rare tumor types.
The plan is to continue to firm up the collection, making more data available for each compound as work is done on them, and to add more compounds with different selectivity profiles and chemotypes. Now, the compounds so far are all things that have been published on by GSK in the past, obviating concerns about IP. There are, though, a multitude of other compounds in the literature from other companies, and you have to think that some of these would be useful additions to the set. How, though, does one get this to happen? That's the stage that things are in now. Beyond that, there's the possibility of some sort of open network to optimize entirely new probes and tools, but there's plenty that could be done even before getting to that stage.
So if you're in academia, and interested in kinase pathways, you absolutely need to take a look at this compound set. And for those of us in industry, we need to think about the benefits that we could get by helping to expand it, or by starting similar efforts of our own in other fields. The science is big enough for it. Any takers?
Category: Academia (vs. Industry) | Biological News | Chemical News | Drug Assays
May 14, 2013
I mentioned Microryza in that last post. Here's Prof. Michael Pirrung, at UC Riverside, with an appeal there to fund the resynthesis of a compound for NCI testing against renal cell carcinoma. It will provide an experienced post-doc's labor for a month to prepare an interesting natural-product-derived proteasome inhibitor that the NCI would like to take to their next stage of evaluation. Have a look - you might be looking at the future of academic research funding, or at least a real part of it.
Category: Cancer | General Scientific News
Crowdfunding academic research might be changing from a near-stunt to a widely used method of filling gaps in a research group's money supply. At least, that's the impression this article at Nature Jobs gives:
The practice has exploded in recent years, especially as success rates for research-grant applications have fallen in many places. Although crowd-funding campaigns are no replacement for grants — they usually provide much smaller amounts of money, and basic research tends to be less popular with public donors than applied sciences or arts projects — they can be effective, especially if the appeals are poignant or personal, involving research into subjects such as disease treatments.
The article details several venues that have been used for this sort of fund-raising, including Indiegogo, Kickstarter, RocketHub, FundaGeek, and SciFund Challenge. I'd add Microryza to that list. And there's a lot of good advice for people thinking about trying it themselves, including how much money to try for (at least at first), the timelines one can expect, and how to get your message out to potential donors.
Overall, I'm in favor of this sort of thing, but there are some potential problems. This gives the general public a way to feel more connected to scientific research, and to understand more about what it's actually like, both of which are goals I feel a close connection to. But (as that quote above demonstrates), some kinds of research are going to be an easier sell than others. I worry about a slow (or maybe not so slow) race to the bottom, with lab heads overpromising what their research can deliver, exaggerating its importance to immediate human concerns, and overselling whatever results come out.
These problems have, of course, been noted. Ethan Perlstein, formerly of Princeton, used RocketHub for his crowdfunding experiment that I wrote about here. And he's written at Microryza with advice about how to get the word out to potential donors, but that very advice has prompted a worried response over at SciFund Challenge, where Jai Ranganathan had this to say:
His bottom line? The secret is to hustle, hustle, hustle during a crowdfunding campaign to get the word out and to get media attention. With all respect to Ethan, if all researchers running campaigns follow his advice, then that’s the end for science crowdfunding. And that would be a tragedy because science crowdfunding has the potential to solve one of the key problems of our time: the giant gap between science and society.
Up to a point, these two are talking about different things. Perlstein's advice is focused on how to run a successful crowdfunding campaign (based on his own experience, which is one of the better guides we have so far), while Ranganathan is looking at crowdfunding as part of something larger. Where they intersect, as he says, is that it's possible that we'll end up with a tragedy of the commons, where the strategy that's optimal for each individual's case turns out to be (very) suboptimal for everyone taken together. He's at pains to mention that Ethan Perlstein has himself done a great job with outreach to the public, but worries about those to follow:
Because, by only focusing on the mechanics of the campaign itself (and not talking about all of the necessary outreach), there lurks a danger that could sink science crowdfunding. Positive connections to an audience are important for crowdfunding success in any field, but they are especially important for scientists, since all we have to offer (basically) is a personal connection to the science. If scientists omit the outreach and just contact audiences when they want money, that will go a long way to poisoning the connections between science and the public. Science crowdfunding has barely gotten started and already I hear continuous complaints about audience exasperation with the nonstop fundraising appeals. The reason for this audience fatigue is that few scientists have done the necessary building of connections with an audience before they started banging the drum for cash. Imagine how poisonous the atmosphere will become if many more outreach-free scientists aggressively cold call (or cold e-mail or cold tweet) the universe about their fundraising pleas.
Now, when it comes to overpromising and overselling, a cynical observer might say that I've just described the current granting system. (And if we want even more of that sort of thing, all we have to do is pass a scheme like this one). But the general public will probably be a bit easier to fool than a review committee, at least, if you can find the right segment of the general public. Someone will probably buy your pitch, eventually, if you can throw away your pride long enough to keep on digging for them.
That same cynical observer might say that I've just described the way that we set up donations to charities, and indeed Ranganathan makes an analogy to NPR's fundraising appeals. That's the high end. The low end of the charitable-donation game is about as low as you can go - just run a search for the words "fake" and "charity" through Google News any day, any time, and you can find examples that will make you ashamed that you have the same number of chromosomes as the people you're reading about. (You probably do). Avoiding this state really is important, and I'm glad that people are raising the issue already.
What if, though, someone were to set up a science crowdfunding appeal, with hopes of generating something that could actually turn a profit, and portions of that to be turned over to the people who put up the original money? We have now arrived at the biopharma startup business, via a different road than usual. Angel investors, venture capital groups, shareholders in an IPO - all of these people are doing exactly that, at various levels of knowledge and participation. The pitch is not so much "Give us money for the good of science", but "Give us money, because here's our plan to make you even more". You will note that the scale of funds raised by the latter technique makes those raised by the former look like a roundoff error, which fits in pretty well with what I take as normal human motivations.
But academic science projects have no such pitch to make. They'll have to appeal to altruism, to curiosity, to mood affiliation, and other nonpecuniary motivations. Done well, that can be a very good thing, and done poorly, it could be a disaster.
Category: Academia (vs. Industry) | Business and Markets | General Scientific News
May 13, 2013
I've heard this morning that Astellas is closing the OSI site in Farmingdale, NY, and the Perseid Therapeutics site in Redwood City, CA. More details as I hear them (and check the comments section; people with more direct knowledge may be showing up in there).
Category: Business and Markets
I wanted to mention a new reaction that's come out in a paper in Science. It's from the Betley lab at Harvard, and it's a new way to make densely substituted saturated nitrogen heterocycles (pyrrolidines, in particular).
You start from a four-carbon chain with an azide at one end, and you end up with a Boc-protected pyrrolidine, by direct activation/substitution of the CH bond at the other end of the chain. Longer chains give you mixtures of different ring sizes (4, 5, and 6), depending on where the catalyst feels like inserting the new bond. I'd like to see how many other functional groups this chemistry is compatible with (can you have another tertiary amine in there somewhere, or a hydroxyl?). But we have a real shortage of carbon-hydrogen functionalization reactions in this business, and this is a welcome addition to a rather short list.
There was a paper last year from the Groves group at Princeton on fluorination of aliphatic CH bonds using a manganese porphyrin complex. These two papers are similar in my mind - they're modeling themselves on the CYP enzymes, using high-valent metals to accomplish things that we normally wouldn't think of being able to do easily. The more of this sort of thing, the better, as far as I'm concerned: new reactions will make us think of entirely new things.
Category: Chemical News
I notice that the recent sequencing of the bladderwort plant is being played in the press in an interesting way: as the definitive refutation of the idea that "junk DNA" is functional. That's quite an about-face from the coverage of the ENCODE consortium's take on human DNA, the famous "80% Functional, Death of Junk DNA Idea" headlines. A casual observer, if there are casual observers of this sort of thing, might come away just a bit confused.
Both types of headlines are overblown, but I think that one set is more overblown than the other. The minimalist bladderwort genome (8.2 x 10^7 base pairs) is only about half the size of that of Arabidopsis thaliana, which rose to fame as a model organism in plant molecular biology partly because of its tiny genome. By contrast, humans (who make up so much of my readership) have about 3 x 10^9 base pairs, almost 40 times as many as the bladderwort. (I stole that line from G. K. Chesterton, by the way; it's from the introduction to The Napoleon of Notting Hill.)
But pine trees have eight times as many base pairs as we do, so it's not a plant-versus-animal thing. And as Ed Yong points out in this excellent post on the new work, the Japanese canopy plant comes in at 1.5 x 10^11 base pairs, fifty times the size of the human genome and nearly two thousand times the size of the bladderwort. This is the same problem as the marbled lungfish versus pufferfish one that I wrote about here, and it's not a new problem at all. People have been wondering about genome sizes ever since they were able to estimate the size of genomes, because it became clear very quickly that they vary hugely, according to patterns that often make little sense to us.
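Since this argument leans on those ratios, here's a quick sanity check on the arithmetic, using the approximate genome sizes quoted above:

```python
# Approximate genome sizes in base pairs, as quoted in the post.
genome_bp = {
    "bladderwort": 8.2e7,
    "human": 3.0e9,
    "Japanese canopy plant": 1.5e11,
}

human = genome_bp["human"]
bladderwort = genome_bp["bladderwort"]
canopy = genome_bp["Japanese canopy plant"]

print(round(human / bladderwort))    # human vs. bladderwort: "almost 40 times"
print(canopy / human)                # canopy plant vs. human: fifty times
print(round(canopy / bladderwort))   # canopy plant vs. bladderwort: roughly two thousand times
```

The canopy-plant-to-bladderwort ratio actually works out to a bit over 1,800, so "nearly two thousand times" is the fairer way to put it.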
That's why the ENCODE hype met (and continues to meet) with such a savage reception. It did nothing to address this issue, and seemed, in fact, to pretend that it wasn't an issue at all. Function, function, everywhere you look, and if that means that you just have to accept that the Japanese canopy plant needs the most wildly complex functional DNA architecture in the living world, well, isn't Nature just weird that way?
Category: Biological News
May 10, 2013
The ChEMBL database of compounds has been including bioactivity data for some time, and the next version of it is slated to have even more. There are a lot of numbers out in the open literature that can be collected, and a lot of numbers inside academic labs. But if you want to tap the deepest sources of small-molecule biological activity data, you have to look to the drug industry. We generate vast heaps of such; it's the driveshaft of the whole discovery effort.
But sharing such data is a very sticky issue. No one's going to talk about their active projects, of course, but companies are reluctant to open the books even to long-dead efforts. The upside is seen as small, and the downside (though unlikely) is seen as potentially large. Here's a post from the ChEMBL blog that talks about the problem:
. . .So, what would your answer be if someone asked you if you consider it to be a good idea if they would deposit some of their unpublished bioactivity data in ChEMBL? My guess is that you would be all in favour of this idea. 'Go for it', you might even say. On the other hand, if the same person would ask you what you think of the idea to deposit some of ‘your bioactivity data’ in ChEMBL the situation might be completely different.
First and foremost you might respond that there is no such bioactivity data that you could share. Well let’s see about that later. What other barriers are there? If we cut to the chase then there is one consideration that (at least in my experience) comes up regularly and this is the question: 'What’s in it for me?' Did you ask yourself the same question? If you did and you were thinking about ‘instant gratification’ I haven’t got a lot to offer. Sorry, to disappoint you. However, since when is science about ‘instant gratification’? If we would all start to share the bioactivity data that we can share (and yes, there is data that we can share but don’t) instead of keeping it locked up in our databases or spreadsheets this would make a huge difference to all of us. So far the main and almost exclusive way of sharing bioactivity data is through publications but this is (at least in my view) far too limited. In order to start to change this (at least a little bit) the concept of ChEMBL supplementary bioactivity data has been introduced (as part of the efforts of the Open PHACTS project, http://www.openphacts.org).
There's more on this in an article in Future Medicinal Chemistry. Basically, if an assay has been described in an open scientific publication, the data generated through it qualifies for deposit in ChEMBL. No one's asking for companies to throw open their books, but even when details of a finished (or abandoned) project are published, there are often many more data points generated than ever get included in the manuscript. Why not give them a home?
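For readers who want to see what's already deposited, ChEMBL exposes its bioactivity data through public REST web services. Here's a minimal sketch of building such a query; the target ID used is just an illustrative placeholder, and the exact endpoint parameters should be checked against the current ChEMBL web-services documentation:

```python
# Sketch: constructing a ChEMBL web-services query for bioactivity records
# against a target. The target ID below is an arbitrary example, not a
# recommendation; consult the ChEMBL API docs for the full parameter list.
import urllib.parse

BASE = "https://www.ebi.ac.uk/chembl/api/data/activity.json"

def chembl_activity_url(target_chembl_id, limit=20):
    """Return a ChEMBL REST URL for activities measured against a target."""
    params = {"target_chembl_id": target_chembl_id, "limit": limit}
    return BASE + "?" + urllib.parse.urlencode(params)

print(chembl_activity_url("CHEMBL1862"))
```

Fetching that URL (with `urllib.request` or `requests`) returns JSON records of assay results, which is the same pool of data that supplementary depositions like GSK's feed into.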
I get the impression, though, that GSK is the only organization so far that's been willing to give this a try. So I wanted to give it some publicity as well, since there are surely many people who aren't aware of the effort at all, and might be willing to help out. I don't expect that data sharing on this level is going to lead to any immediate breakthroughs, of course, but even though assay numbers like this have a small chance of helping someone, they have a zero chance of helping if they're stuck in the digital equivalent of someone's desk drawer.
What can be shared, should be. And there's surely a lot more that falls into that category than we're used to thinking.
Category: Drug Assays | The Scientific Literature
May 9, 2013
Here's a drug-discovery problem that you don't often have to think about. The anticoagulant field is a huge one, with Plavix, warfarin, and plenty of others jostling for a share of a huge market (both for patients to take themselves, and for hospital use). The Factor Xa inhibitors are a recent entry into this area, with Bayer's Xarelto (rivaroxaban) as the key example so far.
But there's a problem with any Xa inhibitor: there's no antidote for them. Blood clotting therapies have a narrow window to work in - anything effective enough to be beneficial will be effective enough to be trouble under other circumstances. Anticoagulants need a corresponding way to cancel out their effects, in case of overdose or other trouble. (Vitamin K is the answer for warfarin). We don't often have to consider this issue, but it's a big one in this case.
Portola Pharmaceuticals has developed a Factor Xa mimic that binds the inhibitors, and thus titrates their effects. They have their own Xa inhibitor coming along (betrixaban), but if this protein makes it through, they'll have done the whole field a favor as well as themselves.
Category: Cardiovascular Disease
Vytorin's been discussed several times around here. The combination of Zetia (ezetimibe), the cholesterol absorption inhibitor discovered at Schering-Plough, with Merck's simvastatin looked as if it should be a very effective cholesterol-lowering medication, but the real-world data have been consistently puzzling. There's a big trial going on that people are hoping will clarify things, but so far it's had the opposite effect. It's no exaggeration to say that the entire absorption inhibitor/statin combination idea is in doubt, and we may well learn a lot about human lipidology as we figure out what's happened. It will have been an expensive lesson.
So in the midst of all this, what does Merck do but trot out another ezetimibe/statin combination? Liptruzet has atorvastatin (generic Lipitor) in it, instead of simvastatin (generic Zocor), and what that is supposed to accomplish is a mystery to me. It's a mystery to Josh Bloom over at the American Council for Science and Health, too, and he's out with an op-ed saying that Merck should be ashamed of itself.
I can't see how he's wrong. What I'm seeing is an attempt by Merck to position itself should the ongoing Vytorin trial actually exonerate the combination idea. Vytorin, you see, doesn't have all that much patent lifetime left; its problems since 2008 have eaten the most profitable years right out of its cycle. So if Vytorin turns out to actually work, after all the exciting plot twists, Merck will be there to tell people that they shouldn't take it. No, they should take exciting new Liptruzet instead. It's newer.
If anyone can think of a reason why this doesn't make Merck look like shady marketeers, I'd like to hear it. And (as Bloom points out) it doesn't make the FDA look all that great, either, since I'm sure that Liptruzet will count towards the end-of-the-year press release about all the innovative new drugs that the agency has approved. Not this time.
Update: John LaMattina's concerned about that last part, too.
Category: Cardiovascular Disease | Why Everyone Loves Us
Want to be weirded out? Study the central nervous system. I started off my med-chem career in CNS drug discovery, and it's still my standard for impenetrability. There's a new paper in Science, though, that just makes you roll your eyes and look up at the ceiling.
The variety of neurotransmitters is well appreciated - you have all these different and overlapping signaling systems using acetylcholine, dopamine, serotonin, and a host of lesser-known molecules, including such oddities as hydrogen sulfide and even carbon monoxide. And on the receiving end, the various subtypes of receptors are well studied, and those give a tremendous boost to the variety of signaling from a single neurotransmitter type. Any given neuron can have several of these going on at the same time - when you consider how many different axons can be sprawled out from a single cell, there's a lot of room for variety.
That, you might think, is a pretty fair amount of complexity. But note also that the density and population of these receptors can change according to environmental stimuli. That's why you get headaches if you don't have your accustomed coffee in the morning (you've made more adenosine A2 receptors, and you haven't put any fresh caffeine ligand into them). Then there are receptor dimers (homo- and hetero-) that act differently than the single varieties, constitutively active receptors that are always on, until a ligand turns them off (the opposite of the classic signaling mechanism), and so on. Now, surely, we're up to a suitable level of complex function.
Har har, says biology. This latest paper shows, by a series of experiments in rats, that a given population of neurons can completely switch the receptor system it uses in response to environmental cues:
Our results demonstrate transmitter switching between dopamine and somatostatin in neurons in the adult rat brain, induced by exposure to short- and long-day photoperiods that mimic seasonal changes at high latitudes. The shifts in SST/dopamine expression are regulated at the transcriptional level, are matched by parallel changes in postsynaptic D2R/SST2/4R expression, and have pronounced effects on behavior. SST-IR/TH-IR local interneurons synapse on CRF-releasing cells, providing a mechanism by which the brain of nocturnal rats generates a stress response to a long-day photoperiod, contributing to depression and serving as functional integrators at the interface of sensory and neuroendocrine responses.
This remains to be demonstrated in human tissue, but I see absolutely no reason why the same sort of thing shouldn't be happening in our heads as well. There may well be a whole constellation of these neurotransmitter switchovers that can take place in response to various cues, but which neurons can do this, involving which signaling regimes, and in response to what stimuli - those are all open questions. And what the couplings are between the environmental response and all the changes in transcription that need to take place for this to happen, those are going to have to be worked out, too.
There may well be drug targets in there. Actually, there are drug targets everywhere. We just don't know what most of them are yet.
Category: The Central Nervous System
May 8, 2013
Over at the Baran group's "Open Flask" blog, there's a post on the number of total synthesis papers that show up in the Journal of the American Chemical Society. I'm reproducing one of the figures below, the percentage of JACS papers with the phrase "total synthesis" in their title.
You can see that the heights of the early 1980s have never been reached again, and that post-2000 there has been a marked drought. As the post notes, JACS seems to have begun publishing many more papers in total around that time (anyone notice this or know anything about it?), and it appears that they certainly didn't fill the new pages with total synthesis. 2013, though, already looks like an outlier, and it's only May.
My own feelings about total synthesis are a matter of record, and have been for some time, if anyone cares. So I'm not that surprised to see the trend in this chart, if trend it is.
But that said, it would be worth running the same analysis on a few other likely journal titles. Has the absolute number of total synthesis papers gone down? Or have they merely migrated (except for the really exceptional ones) to the lower-impact journals? Do fewer papers put the phrase "Total synthesis of. . ." in their titles as compared to years ago? Those are a few of the confounding variables I can think of, and there are probably more. But I think, overall, that the statement "JACS doesn't publish nearly as much total synthesis as it used to" seems to be absolutely correct. Is this a good thing, a bad thing, or some of each?
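The kind of analysis the Open Flask post did (and that could be run against other journals) is straightforward to sketch: given paper titles grouped by year, count what fraction contain the phrase. The titles below are invented examples, not real JACS data:

```python
# Toy sketch of the title-counting analysis: fraction of titles per year
# containing "total synthesis". The sample titles are invented, not real data.
def total_synthesis_share(titles_by_year):
    """Map year -> fraction of titles containing the phrase (case-insensitive)."""
    share = {}
    for year, titles in titles_by_year.items():
        hits = sum("total synthesis" in t.lower() for t in titles)
        share[year] = hits / len(titles) if titles else 0.0
    return share

sample = {
    1983: ["Total Synthesis of Compound A", "A New Ligand System",
           "Total Synthesis of Compound B", "Kinetics of a Rearrangement"],
    2010: ["Total Synthesis of Compound C", "A Catalytic Coupling",
           "A Spectroscopic Study", "A New Material"],
}
print(total_synthesis_share(sample))  # {1983: 0.5, 2010: 0.25}
```

Running the same count over several journals' title lists would answer the migration question directly, though titling conventions (papers that describe a total synthesis without using the phrase) would still confound it.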
Category: Chemical News | The Scientific Literature
Cadmium is bad news. Lead and mercury get all the press, but cadmium is just as foul, even if far fewer people encounter it. Never in my career have I had any occasion to use any, and I like it that way. There was an organocadmium reaction in my textbook when I took sophomore organic chemistry, but it was already becoming obsolete, and good riddance, because this is one of those metals that's best avoided for life. It has acute toxic effects, chronic toxic effects, and if there are any effects in between those it probably has them, too.
Fortunately, cadmium is not well absorbed from the gut, and even more fortunately, no one eats it. But breathing it, now that's another matter, and if you're a nonchemist wondering how someone can breathe metallic elements, then read on. One rather direct way is if someone is careless enough to floof fine powders of them around you. That's how cadmium's toxicity was discovered in the first place, from miners dealing with the dust. But that's only the start. There's a bottom of the list for breathable cadmium, too, which is quite a thought. The general rule is, if you're looking for the worst organic derivatives of any metal, you should hop right on down to the methyl compounds. That's where the most choking vapors, the brightest flames, and the most panicked shouts and heartfelt curses are to be found. Methyl organometallics tend to be small, reactive, volatile, and ready to party.
Dimethyl cadmium, then, represents the demon plunked in the middle of the lowest circle as far as this element is concerned. I'll say only one thing in its favor: it's not quite as reactive as dimethyl zinc, its cousin one row up in the periodic table. No one ever has to worry about inhaling dimethyl zinc; since it bursts into ravenous flames as soon as it hits the air, the topic just never comes up. Then again, when organozincs burn, they turn into zinc oxide, which is inert enough to be used in cosmetics. But slathering your nose with cadmium oxide is not recommended.
Even though dimethylcadmium does not instantly turn into a wall of flame, it can still liven the place up. If you just leave the liquid standing around, hoping it'll go away, there are two outcomes. If you have a nice wide spill of it, with a lot of surface area, you fool, it'll probably still ignite on its own, giving off plenty of poisonous cadmium oxide smoke. If for some reason it doesn't do that, you will still regret your decision: the compound will react with oxygen anyway and form a crust of dimethyl cadmium peroxide, a poorly characterized compound (go figure) which is a friction-sensitive explosive. I've no idea how you get out of that tight spot; any attempts are likely to suddenly distribute the rest of the dimethylcadmium as a fine mist. Water is not the answer. One old literature report says that "When thrown into water, (dimethylcadmium) sinks to the bottom in large drops, which decompose in a series of sudden explosive jerks, with crackling sounds", and you could not ask for a clearer picture of the devil finding work for idle hands. Or idle heads.
Even without all this excitement, the liquid has an alarmingly high vapor pressure, and that vapor is alarmingly well absorbed on inhalation. A few micrograms (yep, millionths of a gram) of it per cubic meter of air hits the legal limits, and I'd prefer to be surrounded by far less. It's toxic to the lungs, naturally, but since it gets into the blood stream so well, it's also toxic to the liver, and to the kidneys (basically, the organs that are on the front lines when it's time to excrete the stuff), and to the brain and nervous system. Cadmium compounds in general have also been confirmed as carcinogenic, should you survive the initial exposure.
After all this, if you still feel the urge to experience dimethylcadmium - stay out of my lab - you can make this fine compound quite easily from cadmium chloride, which I've no particular urge to handle, either, and methyllithium or methyl Grignard reagent. Purifying it away from the ethereal solvents after that route, though, looks like extremely tedious work, which allows you the rare experience of being bored silly by something that's trying to kill you. It is safe to assume that the compound will swiftly penetrate latex gloves, just like deadly and hideous dimethylmercury, so you'll want to devote some time to thinking about how you'll handle the fruits of your labor.
I'm saddened to report that the chemical literature contains descriptions of dimethylcadmium's smell. Whoever provided these reports was surely exposed to far more of the vapor than common sense would allow, because common sense would tell you to stay about a half mile upwind at all times. At any rate, its odor is variously described as "foul", "unpleasant", "metallic", "disagreeable", and (wait for it) "characteristic", which is an adjective that shows up often in the literature with regard to smells, and almost always makes a person want to punch whoever thought it was useful. We can assume that dimethylcadmium is not easily confused with beaujolais in the blindfolded sniff test, but not much more. So if you're working with organocadmium derivatives and smell something nasty, but nasty in a new, exciting way that you've never quite smelled before, then you can probably assume the worst.
Now, as opposed to some of the compounds on my list, you can find people who've handled dimethylcadmium, or even prepared it, worse luck, although it is an (expensive) article of commerce. As mentioned above, it used to be in all the textbooks as a reliable way to form methyl ketones from acid chlorides, but there are far less evil reagents that can do that for you now. It's still used (on a research scale) to make exotic photosensitive and semiconducting materials, but even those hardy folk would love to find an alternative. No, this compound appears to have no fan club whatsoever. Start one at your own risk.
Category: Things I Won't Work With
May 7, 2013
The "New Germ Theory" people may have notched up another one: a pair of reports out from a team in Denmark strongly suggest that many cases of chronic low back pain are due to low-grade bacterial infection. They've identified causative agents (Propionibacterium acnes) by isolating them from tissue, and showed impressive success in the clinic by treating back pain patients with a lengthy course of antibiotics. Paul Ewald is surely smiling about this news, although (as mentioned here) he has some ideas about the drug industry that I can't endorse.
So first we find out that stomach ulcers are not due to over-dominant mothers, and now this. What other hard-to-diagnose infections are we missing? Update - such as obesity, maybe?
Category: Infectious Diseases
In case you're wondering how the deuterated-drugs idea is coming along, the answer seems to be "just fine", at least for Concert Pharmaceuticals. They've announced their third collaboration inside of a year, this time with Celgene.
And they've got their own compound in development, CTP-499, in Phase II for diabetic nephropathy. That's a deutero analog of HDX (1-((S)-5-hydroxyhexyl)-3,7-dimethylxanthine), which is an active metabolite of the known xanthine drug pentoxifylline (which has also been investigated in diabetic kidney disease). You'd assume that deuteration makes this metabolite hang around longer, rather than being excreted, which is just the sort of profile shift that Concert is targeting.
Long-term, the deuteration idea has now diffused out into the general drug discovery world, and there will be no more easy pickings for it (well, at least not so many, depending on how competently patents are drafted). But if Concert can make a success out of what they have going already, they're already set for a longer term than most startups.
Category: Pharmacokinetics
You may remember this case from Chemistry - A European Journal earlier this year, where a paper appeared whose text was largely copy-pasted from a previous JACS paper from another lab. This one has finally been pulled; Retraction Watch has the details.
The most interesting part is the statement "The authors regret this approach", which I don't recall ever seeing in a situation like this. The comments at Retraction Watch build on this, and are quite interesting. There are many countries (and cultures) where it's considered acceptable (or at least a venial sin) to lift passages verbatim from other English-language papers when you're publishing in that language. I can see the attraction - I would hate to have to deliver a scientific manuscript in German, for example, which is the closest thing I have to a second language.
But I still wouldn't do it by copying and pasting big hunks of text, either. Reasons for resorting to that range from wanting to be absolutely sure that things are being expressed correctly in one's third or fourth language, all the way to "Isn't that how it's supposed to be done?" The latter situation obtains in parts of Asia, where apparently there's an emphasis in some schools on verbatim transcription of authoritative sources. There's an interesting cite to Yu Hua's China in Ten Words, where one of those ten words is "copycat" (shanzhai):
As a product of China’s uneven development, the copycat phenomenon has as many negative implications as it has positive aspects. The moral bankruptcy and confusion of right and wrong in China today, for example, find vivid expression in copycatting. As the copycat concept has gained acceptance, plagiarism, piracy, burlesque, parody, slander, and other actions originally seen as vulgar or illegal have been given a reason to exist; and in social psychology and public opinion they have gradually acquired respectability. No wonder that “copycat” has become one of the words most commonly used in China today. All of this serves to demonstrate the truth of the old Chinese saying: “The soil decides the crop, and the vine shapes the gourd.”
Four years ago I saw a pirated edition of [my novel] Brothers for sale on the pedestrian bridge that crosses the street outside my apartment; it was lying there in a stack of other pirated books. When the vendor noticed me running my eyes over his stock, he handed me a copy of my novel, recommending it as a good read. A quick flip through and I could tell at once that it was pirated. “No, it’s not a pirated edition,” he corrected me earnestly. “It’s a copycat.”
This tendency isn't a good fit with a lot of things, but it especially doesn't work out so well with scientific publication. I haven't seen it stated in so many words, but a key assumption is that every scientific paper is supposed to be different. If you take the time to read a new paper, you should learn something new and you should see something that you haven't seen before. It might be trivial, it might well be useless, but it should be at least slightly different from any other paper you've read or could find.
Now, as the Retraction Watch comments mention, some of these plagiarism cases are examples of "templating", where original (or sort of original) work was done, but the presentation of it was borrowed from an existing paper. That's not as bad as faking up results completely, of course, but you still have to wonder about the value of your work if you can lift big swaths of someone else's paper to describe it. Even when the manuscript itself has been written fresh from the ground up, there's plenty of stuff out in the literature like this. Someone gets an interesting reaction with a biphenyl and a zinc catalyst, and before you know it, there are all these quickie communications where someone else says "Hey, we got that with a naphthyl", or "Hey, we got that with a boron halide catalyst". Technically, yes, these are different, but we're in the land of least publishable units now, where the salami is sliced so thinly that you can read a newspaper through it.
So the authors regret this approach, do they? So does everyone else.
Category: The Dark Side | The Scientific Literature
May 6, 2013
Here's a fine profile of Merck's Ken Frazier at Forbes. Matthew Herper does a good job of showing the hole that Merck has been slowly sliding into over the past few years, and wonders if Frazier is going to be able to drag the company out of it:
But it is clear that Frazier still views himself through the prism of his lawyerly training–he has not yet grown into a commanding and decisive chief executive. He’s scrupulous about not making anyone else look bad–working almost too hard in interviews to be clear that Perlmutter’s predecessor was not fired–and seems to be afraid to be seen as making too many big changes. “I am a person who does not subscribe to the hero-CEO school of thought,” he says. His persona is the culmination of the careful lessons he learned from his long climb to the top and his masterful legal defense against the lawsuits related to the pain pill Vioxx, which saved Merck and got him the top job. In order to be a great leader, he’s going to have to unlearn them.
I don't subscribe much to the hero-CEO school, either, at least not for a company the size of Merck. But even for a huge company, I think a rotten CEO can do a lot more harm than a good one can help (there's some thermodynamic way to express that, I'm sure). Frazier is certainly not in that category, and I've enjoyed some of the things he's had to say in the past (although I've also wondered about the follow-through). I wonder, though: how much of what Merck needs is in Frazier's power to do anything about? Or any one person's?
Update: here's David Shaywitz at Forbes, wondering about similar issues and what biopharma CEOs can actually do about them.
Category: Business and Markets
Here's the latest "medical periodic table", courtesy of this useful review in Chemical Communications. Element symbols in white are known to be essential in man. The ones with a blue background are found in the structures of known drugs, the orange ones are used in diagnostics, and the green ones are medically useful radioisotopes. (The paper notes that titanium and tantalum are colored blue due to their use in implants).
I'm trying to figure out a couple of these. Xenon I've heard of as a diagnostic (hyperpolarized and used in MRI of lung capacity), but argon? (The supplementary material for the paper says that argon plasma has been used locally to control bleeding in the GI tract). And aren't there marketed drugs with a bromine atom in them somewhere? At any rate, the greyed-out elements end up that way through four routes, I think. Some of them (francium, and other high-atomic-number examples) are just too unstable (and thus impossible to obtain) for anything useful to be done with them. Others (uranium) are radioactive, but have not found a use that other radioisotopes haven't filled already. Then you have the "radioactive and toxic" category, the poster child of which is plutonium. (That said, I'm pretty sure that popular reports of its toxicity are exaggerated, but it still ain't vanilla pudding). Then you have the nonradioactive but toxic crowd - cadmium, mercury, beryllium and so on. (There's another question - aren't topical mercury-based antiseptics still used in some parts of the world? And if tantalum gets on the list for metal implants, what about mercury amalgam tooth fillings?) Finally, you have elements that are neither hot nor poisonous, but that no one has been able to find any medical use for (scandium, niobium, hafnium). Scandium and beryllium, in fact, are my nominees for "lowest atomic-numbered elements that many people have never heard of", and because of nonsparking beryllium wrenches and the like, I think scandium might win out. I've never found a use for it myself, either. I have used a beryllium-copper wrench (they're not cheap) in a hydrogenation room.
The review goes on to detail the various classes of metal-containing drugs, most prominent of them being, naturally, the platinum anticancer agents. There are ruthenium complexes in the clinic in oncology, and some work has been done with osmium and iridium compounds. Ferrocenyl compounds have been tried several times over the years, often put in place of a phenyl ring, but none of them (as far as I know) have made it into the general pharmacopeia. What I didn't know was that titanocene dichloride has been into the clinic (but with disappointing results). And arsenic compounds have a long (though narrow) history in medicinal chemistry, but have recently made something of a comeback. The thioredoxin pathway seems to be a good fit for exotic elements - there's a gadolinium compound in development, and probably a dozen other metals have shown activity of one kind or another, both in oncology and against things like malaria parasites.
Many of these targets, though, are in sort of a "weirdo metal" category in the minds of most medicinal chemists, and that might not reflect reality very well. There's no reason why metal complexes wouldn't be able to inhibit more traditional drug targets as well, but that brings up another concern. For example, there have been several reports of rhodium, iridium, ruthenium, and osmium compounds as kinase inhibitors, but I've never quite been able to see the point of them, since you can generally get some sort of kinase inhibitor profile without getting that exotic. But what about the targets where we don't have a lot of chemical matter - protein/protein interactions, for example? Who's to say that metal-containing compounds wouldn't work there? But I doubt if that's been investigated to any extent at all - not many companies have such things in their compound collections, and it still might turn out to be a wild metallic goose chase to even look. No one knows, and I wonder how long it might be before anyone finds out.
In general, I don't think anyone has a feel for how such compounds behave in PK and tox. Actually "in general" might not even be an applicable term, since the number and types of metal complexes are so numerous. Generalization would probably be dangerous, even if our base of knowledge weren't so sparse, which sends you right back into the case-by-case wilderness. That's why a metal-containing compound, at almost any biopharma company, would be met with the sort of raised eyebrow that Mr. Spock used to give Captain Kirk. What shots these things have at becoming drugs will be in nothing-else-works areas (like oncology, or perhaps gram-negative antibiotics), or against exotic mechanisms in other diseases. And that second category, as mentioned above, will be hard to get off the ground, since almost no one tests such compounds, and you don't find what you don't test.
Category: Cancer | Odd Elements in Drugs | Toxicology
May 3, 2013
There's a truly disturbing paper out in PLoS ONE with potential implications for a lot of assay data out there in the literature. The authors are looking at the results of biochemical assays as a function of how the compounds are dispensed in them, pipet tip versus acoustic, which is the sort of idea that some people might roll their eyes at. But people who've actually done a lot of biological assays may well feel a chill at the thought, because this is just the sort of you're-kidding variable that can make a big difference.
Dispensing and dilution processes may profoundly influence estimates of biological activity of compounds. Published data show Ephrin type-B receptor 4 IC50 values obtained via tip-based serial dilution and dispensing versus acoustic dispensing with direct dilution differ by orders of magnitude with no correlation or ranking of datasets.
Lovely. There have been some alarm bells sounded before about disposable-pipet-tip systems. The sticky-compound problem is always out there, where various substances decide that they like the plastic walls of the apparatus a lot more than they like being in solution. That'll throw your numbers all over the place. And there have been concerns about bioactive substances leaching out of the plastic. (Those are just two recent examples - this new paper has several other references, if you're worried about this sort of thing).
This paper seems to have been set off by two recent AstraZeneca patents on the aforementioned EphB4 inhibitors. In the assay data tables, these list assay numbers as determined via both dispensing techniques, and they are indeed all over the place. One of the authors of this new paper is from Labcyte, the makers of the acoustic dispensing apparatus, and it's reasonable to suppose that their interactions with AZ called their attention to this situation. It's also reasonable to note that Labcyte itself has an interest in promoting acoustic dispensing technology, but that doesn't make the numbers any different. The fourteen compounds shown are invariably less potent via the classic pipet method, but by widely varying factors. So, which numbers are right?
The assumption would be that the more potent values have a better chance of being correct, because it's a lot easier to imagine something messing up the assay system than something making it read out at greater potency. But false positives certainly exist, too, so the authors used the data set to generate a possible pharmacophore for the compound series using both sets of numbers. And it turns out that the one from the acoustic dispensing runs gives you a binding model that matches pretty well with reality, while if you use the pipet data you get something broadly similar, but missing some important contributions from hydrophobic groups. That, plus the fact that the assay data shows a correlation with logP in the acoustic-derived data (but not so much with the pipet-derived numbers) makes it look like the sticky-compound effect might be what's operating here. But it's hard to be sure:
No previous publication has analyzed or compared such data (based on tip-based and acoustic dispensing) using computational or statistical approaches. This analysis is only possible in this study because there is data for both dispensing approaches for the compounds in the patents from AstraZeneca that includes molecule structures. We have taken advantage of this small but valuable dataset to perform the analyses described. Unfortunately it is unlikely that a major pharmaceutical company will release 100's or 1000's of compounds with molecule structures and data using different dispensing methods to enable a large scale comparison, simply because it would require exposing confidential structures. To date there are only scatter plots on posters and in papers as we have referenced, and critically, none of these groups have reported the effect of molecular properties on these differences between dispensing methods.
Some of those other references are to posters and meeting presentations, so this seems to be one of those things that floats around in the field without landing explicitly in the literature. One of the paper's authors was good enough to send along the figure shown, which brings some of these data together, and it's an ugly sight. This paper is probably doing a real service in getting this potential problem out into the cite-able world: now there's something to point at.
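As an aside, the property check the authors ran - correlating logP with the size of the potency discrepancy between dispensing methods - is simple to reproduce on any matched dataset. The numbers below are invented purely to illustrate the calculation (they are not from the AstraZeneca patents):

```python
import numpy as np

# Hypothetical compound set: calculated logP and the fold-difference in
# IC50 between tip-based and acoustic dispensing. These values are made
# up for illustration only; the method is the point.
logp = np.array([1.2, 2.0, 2.5, 3.1, 3.6, 4.0, 4.4, 5.0])
fold = np.array([1.5, 2.0, 4.0, 6.0, 15.0, 40.0, 90.0, 250.0])

# Compare log(fold-shift) against logP; a strong positive correlation
# would be consistent with lipophilic compounds sticking to the tips.
r = np.corrcoef(logp, np.log10(fold))[0, 1]
print(f"Pearson r (logP vs log fold-shift): {r:.2f}")
```

A flat or negative correlation on a real dataset would argue against the sticky-compound explanation; a strong positive one, as in the paper, supports it.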
How many other datasets are hosed up because of this effect? Now there's an important question, and one that we're not going to have an answer for any time soon. For some sets of compounds, there may be no problems at all, while others (as that graphic shows) can be a mess. There are, of course, plenty of projects where the assay numbers seem (more or less) to make sense, but there are plenty of others where they don't. Let the screener beware.
Update: here's a behind-the-scenes look at how this paper got published. It was not an easy path into the literature, by any means.
Second update: here's more about this at Nature Methods.
Category: Drug Assays
May 2, 2013
The kinase inhibitor tivozanib (for renal cell carcinoma) was shot down this morning at an FDA committee hearing. There are going to be a lot of arguments about this decision, because feelings have been running high on both sides of the issue.
And this has been an issue for over a year now. As that FierceBiotech story puts it:
Tivozanib hit its primary endpoint, demonstrating a slim but statistically significant improvement in progression-free-survival of patients with advanced renal cell carcinoma when compared to Nexavar (sorafenib). But the sorafenib arm experienced a slightly better overall survival rate, and Aveo has been trying to explain it away ever since.
The developer had to start in the spring of 2012 at a pre-NDA meeting. According to the review document, "the FDA expressed concern about the adverse trend in overall survival in the single Phase III trial and recommended that the sponsor conduct a second adequately powered randomized trial in a population comparable to that in the US."
The Phase III in question was performed in Eastern Europe, and one of the outcomes of today's decision may be a reluctance to rely on that part of the world for pivotal trials. I'm honestly not sure how much of tivozanib's problems were due to that (if the data had been stronger, no one would be wondering). But if the patient population in the trial was far enough off the intended US market to concern the FDA, then there was trouble coming from a long way away.
Aveo, though, may not have had many options by this time. This is one of those situations where a smaller company has enough resources to barely get something through Phase III, so they try to do it as inexpensively as they can (thus Eastern Europe). By the time things looked dicey, there wasn't enough cash to do anything over, so they took what they had to the FDA and hoped for the best. The agency's suggestion to do a US trial must have induced some despair, since (1) they apparently didn't have the money to do it, and (2) this meant that the chances of approval on the existing data were lower than they'd hoped.
One of the other big issues that this decision highlights is in trial design. This was a "crossover" trial, where patients started out on one medication and then could be switched to another as their condition progressed. So many crossed over to the comparison drug (Nexavar, sorafenib) that it seems to have impaired the statistics of the trial. Were the overall survival numbers slightly better in the eventual Nexavar group because they'd been switched to that drug, or because they'd gotten tivozanib first? That's something you'd hope that a more expensive/well-run Phase III would have addressed, but in the same way that this result casts some doubt on the Eastern European clinical data, it casts some doubt on crossover trial design in this area.
Update: a big problem here was that there were many more patients who crossed over to tivozanib from Nexavar than the other way around. That's a design problem for you. . .
What a mess - and what a mess for Aveo, and their investors. I'm not sure if they've got anything else; it looks like they'd pretty much bet the company on this. Which must have been like coming to the showdown at the poker table with a low three-of-a-kind, knowing that someone else probably has it beat. . .
Category: Cancer | Clinical Trials | Regulatory Affairs
I've been reading E. O. Wilson's new book, Letters to a Young Scientist. It's the latest addition to the list of "advice from older famous scientists" books, which also includes Peter Medawar's similarly titled Advice To A Young Scientist and what is probably the grandfather of the entire genre, Ramón y Cajal's Advice for a Young Investigator. A definite personal point of view comes across in this one, since its author is famously unafraid to express his strongly held opinions. There's some 100-proof Wilson in this book as well:
. . .Science is the wellspring of modern civilization. It is not just "another way of knowing", to be equated with religion or transcendental meditation. It takes nothing away from the genius of the humanities, including the creative arts. Instead it offers ways to add to their content. The scientific method has been consistently better than religious beliefs in explaining the origin and meaning of humanity. The creation stories of organized religions, like science, propose to explain the origin of the world, the content of the celestial sphere, and even the nature of time and space. These mythic accounts, based mostly on the dreams and epiphanies of ancient prophets, vary from one religion's belief to another. Colorful they are, and comforting to the minds of believers, but each contradicts all the others. And when tested in the real world they have so far proved wrong, always wrong.
And that brings up something else about all the books of this type: they're partly what their titles imply, guides for younger scientists. They're partly memoirs of their authors' lives (Francis Crick's What Mad Pursuit is in this category, although it has a lot of useful advice itself). And they're all attempts to explain what science really is and how it really works, especially to readers who may well not be scientists themselves.
Wilson does some of all three here, although he draws on his own life and research mainly to illustrate the advice he's giving. And that advice, I think, is almost always on target. He has sections on how to pick areas of research, methods to use for discovery, how best to spend your time as a scientist, and so on. The book is absolutely, explicitly aimed at those who want to make their mark by discovering new things, not at those who would wish to climb other sorts of ladders. (For example, he tells academic scientists "Avoid department-level administration beyond thesis committee chairmanships if at all fair and possible. Make excuses, dodge, plead, trade.") If your ambition is to become chairman of the department or a VP of this or that, this is not the book to turn to.
But I've relentlessly avoided being put onto the managerial track myself, so I can relate to a lot of what this book has to say. Wilson spent his life at Harvard, so much of his advice has an academic slant, but the general principles of it come through very clearly. Here's how to pick an area to concentrate on:
I believe that other experienced scientists would agree with me that when you are selecting a domain of knowledge in which to conduct original research, it is wise to look for one that is sparsely inhabited. . .I advise you to look for a chance to break away, to find a subject you can make your own. . .if a subject is already receiving a great deal of attention, if it has a glamorous aura, if its practitioners are prizewinners who receive large grants, stay away from that subject.
One of the most interesting parts of the book for me is its take on two abilities that most lay readers would take as prerequisites for a successful scientist: mathematical ability and sheer intelligence in general. The first is addressed very early in the book, in what may well become a famous section:
. . .If, on the other hand, you are a bit short in mathematical training, even very short, relax. You are far from alone in the community of scientists, and here is a professional secret to encourage you: many of the most successful scientists in the world today are mathematically no more than semiliterate.
He recommends making up this deficiency, as much as you find it feasible to do so, but he's right. The topic has come up around here - I can tell you for certain that the math needed to do medicinal chemistry is not advanced, and mostly consists of being able to render (and understand) data in a variety of graphical forms. If you can see why a log/log plot tends to give you straightened-out lines, you've probably got enough math to do med-chem. You'll also need to understand something about statistics, but (again) mostly in how to interpret it so you aren't fooled by data. Pharmacokinetics gets a bit more mathematical, and (naturally) molecular modeling itself is as math-heavy as anyone could want, but the chemistry end of things is not.
As for intelligence, see what you think about this:
Original discoveries cannot be made casually, not by anyone at any time or anywhere. The frontier of scientific knowledge, often referred to as the cutting edge, is reached with maps drawn by earlier investigators. . .But, you may well ask, isn't the cutting edge a place only for geniuses? No, fortunately. Work accomplished on the frontier defines genius, not just getting there. In fact, both accomplishments along the frontier and the final eureka moment are achieved more by entrepreneurship and hard work than by native intelligence. This is so much the case that in most fields most of the time, extreme brightness may be a detriment. It has occurred to me, after meeting so many successful researchers in so many disciplines, that the ideal scientist is smart only in an intermediate degree: bright enough to see what can be done but not so bright as to become bored doing it.
By "entrepreneurship", he doesn't mean forming companies. That's Wilson's term for opportunistic science - setting up some quick and dirty experiments around a new idea to see what might happen, and being open to odd results as indicators of a new direction to take your work. I completely endorse that, in case anyone cares. As for the intelligence part, you have to keep in mind that this is E. O. Wilson telling you that you don't need to be fearsomely intelligent to be successful, and that his scale for evaluating this quality might be calibrated a bit differently from the usual. As Tom Wolfe put it in his essay in Hooking Up, one of Wilson's defining characteristics has been that you could put him down almost anywhere on Earth and he'd be the smartest person in the room. (I should note that Wolfe's essay overall is not exactly a paean, but he knows not to underestimate the guy).
I think that intelligence falls under the "necessary but not sufficient" heading. And I probably haven't seen that many people operate whom the likes of E. O. Wilson would consider extremely smart, so I can't comment much on what happens at that end of the scale. But the phenomenon of people who score very highly on attempted measures of intelligence, but never seem to make much of themselves, is so common as to be a cliché. You cannot be dumb and make a success of yourself as a research scientist. But being smart guarantees nothing.
As an alternative to mathematical ability and (very) high intelligence, Wilson offers the prescription of hard work. "Scientists don't take vacations", he says, they take field trips. That might work out better if you're a field biologist, but not so well for (say) organic chemistry. And actually, I think that clearing your head with some time off can help out a great deal when you're bogged down in some topic. But having some part of your brain always on the case really is important. Breaks aside, long-term sustained attention to a problem is worth a lot, and not everyone is capable of it.
Here's more on the opportunistic side of things:
Polymer chemistry, computer programs of biological processes, butterflies of the Amazon, galactic maps, and Neolithic sites in Turkey are the kinds of subjects worthy of a lifetime of devotion. Once deeply engaged, a steady stream of small discoveries is guaranteed. But stay alert for the main chance that lies to the side. There will always be the possibility of a major strike, some wholly unexpected find, some little detail that catches your peripheral attention that might very well, if followed, enlarge or even transform the subject you have chosen. If you sense such a possibility, seize it. In science, gold fever is a good thing.
I know exactly what he's talking about here, and I think he's completely right. Many, many big discoveries have their beginnings in just this sort of thing. Isaac Asimov was on target when he said that the real sound of a breakthrough was not the cry of "Eureka!" but a puzzled voice saying "Hmm. That's funny. . ."
Well, the book has much more where all this comes from. It's short, which tempts a person to read through it quickly. I did, and found that this slighted some of the points it tries to make. It improved on a second pass, in my case, so you may want to keep this in mind.
Category: Book Recommendations | Who Discovers and Why
May 1, 2013
I'm going to be traveling today, mostly through airports without good Wi-Fi (for which read "Wi-Fi that they don't want me to pay $10 for during my 90-minute layover"). But I wanted to put out a question sent in by a reader that I think would be worthwhile:
What are the best web sites for a medicinal chemist to have bookmarked? Resources for medicine and biology, organic chemistry, analytical chemistry, and pharma development would all be appropriate. There are shorter lists available here and there, but I don't think that there's One Big List that's easily findable, and I think that there needs to be one. Suggestions in the comments - that should put together something pretty useful.
Category: Blog Housekeeping
April 30, 2013
I've had a few people send along this article, on the possible toxicological effects of the herbicide glyphosate, wondering what I make of it as a medicinal chemist. It's getting a lot of play in some venues, particularly the news-from-Mother-Nature outlets. After spending some time reading this paper over, and looking through the literature, I've come to a conclusion: it is, unfortunately, a load of crap.
The authors believe that glyphosate is responsible for pretty much every chronic illness in humans, and a list of such is recited several times during the course of the long, rambling manuscript. Their thesis is that the compound is an inhibitor of the metabolizing CYP enzymes, of the biosynthesis of aromatic amino acids by gut bacteria, and of sulfate transport. But the evidence given for these assertions, and their connection with disease, while it might look alarming and convincing to someone who has never done research or read a scientific paper, is a spiderweb of "might", "could", "is possibly", "associated with", and so on. The minute you look at the actual evidence, things disappear.
Here's an example - let's go right to the central thesis that glyphosate inhibits CYP enzymes in the liver. Here's a quote from the paper itself:
A study conducted in 1998 demonstrated that glyphosate inhibits cytochrome P450 enzymes in plants. CYP71s are a class of CYP enzymes which play a role in detoxification of benzene compounds. An inhibitory effect on CYP71B1l extracted from the plant, Thlaspi arvensae, was demonstrated through an experiment involving a reconstituted system containing E. coli bacterial membranes expressing a fusion protein of CYP71B fused with a cytochrome P450 reductase. The fusion protein was assayed for activity level in hydrolyzing a benzo(a)pyrene, in the presence of various concentrations of glyphosate. At 15 microM concentration of glyphosate, enzyme activity was reduced by a factor of four, and by 35 microM concentration enzyme activity was completely eliminated. The mechanism of inhibition involved binding of the nitrogen group in glyphosate to the haem pocket in the enzyme.
A more compelling study demonstrating an effect in mammals as well as in plants involved giving rats glyphosate intragastrically for two weeks. A decrease in the hepatic level of cytochrome P450 activity was observed. As we will see later, CYP enzymes play many important roles in the liver. It is plausible that glyphosate could serve as a source for carcinogenic nitrosamine exposure in humans, leading to hepatic carcinoma. N-nitrosylation of glyphosate occurs in soils treated with sodium nitrite, and plant uptake of the nitrosylated product has been demonstrated. Preneoplastic and neoplastic lesions in the liver of female Wistar rats exposed to carcinogenic nitrosamines showed reduced levels of several CYP enzymes involved with detoxification of xenobiotics, including NADPH-cytochrome P450 reductase and various glutathione transferases. Hence this becomes a plausible mechanism by which glyphosate might reduce the bioavailability of CYP enzymes in the liver.
Glyphosate is an organophosphate. Inhibition of CYP enzyme activity in human hepatic cells is a well-established property of organophosphates commonly used as pesticides. In [a cited study], it was demonstrated that organophosphates upregulate the nuclear receptor, constitutive androstane receptor (CAR), a key regulator of CYP activity. This resulted in increased synthesis of CYP2 mRNA, which they proposed may be a compensation for inhibition of CYP enzyme activity by the toxin. CYP2 plays an important role in detoxifying xenobiotics.
Now, that presumably sounds extremely detailed and impressive if you don't know any toxicology. What you wouldn't know from reading through all of it is that their reference 121 actually tested glyphosate against human CYP enzymes. In fact, you wouldn't know that anyone has ever actually done such an experiment, because all the evidence adduced in the paper is indirect - this species does that, so humans might do this, and this might be that, because this other thing over here has been shown that it could be something else. But the direct evidence is available, and is not cited - in fact, it's explicitly ignored. Reference 121 showed that glyphosate was inactive against all human CYP isoforms except 2C9, where it had an IC50 of 3.7 micromolar. You would also not know from this new paper that there is no way that ingested glyphosate could possibly reach levels in humans sufficient to inhibit CYP2C9 at that potency.
I'm not going to spend more time demolishing every point this way; this one is representative. This paper is a tissue of assertions and allegations, a tendentious brief for the prosecution that never should have been published in such a form in any scientific journal. Ah, but it's published in the online journal Entropy, from the MDPI people. And what on earth does this subject have to do with entropy, you may well ask? The authors managed to work that into the abstract, saying that glyphosate's alleged effects are an example of "exogenous semiotic entropy". And what the hell is that, you may well ask? Why, it's a made-up phrase making its first appearance, that's what it is.
But really, all you need to know is that MDPI is the same family of "journals" that published the (in)famous Andrulis "Gyres are the key to everything!" paper. And then made all kinds of implausible noises about layers of peer review afterwards. No, this is one of the real problems with sleazy "open-access" journals. They give the whole idea of open-access publishing a black eye, and they open the floodgates to whatever ridiculous crap comes in, which then gets "peer reviewed" and "published" in an "actual scientific journal", where it can fool the credulous and mislead the uninformed.
Category: The Scientific Literature | Toxicology
I'm in Madison, Wisconsin, where I'll be giving the Organic Chemistry McElvain Seminar later on today. The title of my talk, which I'm not sure if I'll live up to or not, is "Medicinal Chemistry: Getting Old, Or Just Starting to Grow Up?". It's at 3:30 in the Seminar Hall, room 1315, if you're passing through (!)
Category: Blog Housekeeping
April 29, 2013
There's been a lot of rumbling recently about the price of new cancer drugs (see this article for a very typical reaction). It's a topic that's come up around here many times, as would be only natural - scrolling back in this category will turn up a whole list of posts.
I see that Bernard Munos has weighed in on the topic in Forbes. He's not being Doctor Feelgood about it, either:
All this adds up to a giant pushback against the astronomical drug prices that are becoming commonplace. It seems that price tags of $100,000 or above are becoming the norm. Of 12 cancer drugs approved in 2012, 11 cost more than that. As more drugs are offered at that level and their sponsors get away with it, it seems to set a floor that emboldens drug companies to push the envelope. They are badly misjudging the brewing anger.
The industry’s standard defense has been to run warm-hearted stories about the wonders of biomedical innovation, and to point out that drugs represent only 10% of healthcare costs. Both arguments miss the point. Everyone loves biomedical innovation, but the industry’s annual output of 25 to 35 new drugs is a lousy return for its $135 billion R&D spending. . .
That's a real problem. We in the industry concentrate on our end of it, where we wonder how we can spend this much for our discovery efforts and survive. But there are several sides to the issue. From one angle, as long as we can jack up the prices high enough on what does get through, we can (in theory) stay in business. That's not going to happen. There are limits to what we can charge, and we're starting to bang up against them, in the way that a Martingale player at a roulette table learns why casinos have betting limits at the tables. It's not a fun barrier to bump into.
And there's the problem Munos brings up, which is one that investors have been getting antsy about for some time: return on capital. The huge amounts of money going out the door are (at least in some cases) not sustainable. But we're not spending our money as if there were a problem:
Perhaps the mood would be different if the industry was a model of efficiency, but this is hardly the case. Examples of massive waste are on display everywhere: Pfizer wants to flatten a 750,000-square-foot facility in Groton, CT, and won’t entertain proposals for alternative uses. Lilly writes off over $100 million for a half-built insulin plant in Virginia, only to restart the project a few years later in Indiana. AstraZeneca shutters its R&D labs at Alderley Park and goes on to spend $500 million on a new facility in Cambridge.
Munos is right. We have enough trouble already without asking for more. Don't we?
Category: Cancer | Drug Prices | Why Everyone Loves Us
That Lamar Smith proposal I wrote about earlier this morning can be summarized as "Why don't you people just work on the good stuff?" And I thought it might be a good time to link back to a personal experience I had with just that worldview. As you'll see from that story, all they wanted was for us to meet the goals that we put down on our research goals forms. I was told, face to face, that the idea was that this would make us put our efforts into the projects that were most likely to succeed. Who could object to that? Right?
But since we here in the drug industry are so focused on making money, y'know, you'd think that we would have even more incentives to make sure that we're only working on the things that are likely to pay off. And we can't do it. Committees vet proposals, managers look over progress reports, presentations are reviewed and data are sifted, all to that end, because picking the wrong project can sink you good and proper, while picking the right one can keep you going for years to come. But we fail all the time. A good 90% of the projects that make it into the clinic never make it out the other end, and the attrition even before getting into man is fierce indeed. We back the wrong horses for the best reasons available, and sometimes we back the right ones for reasons that end up evaporating along the way. This is the best we can do, the state of the art, and it's not very good at all.
And that's in applied research, with definite targets and endpoints in mind the whole way through. Now picture what it's like in the basic research end of things, which is where a lot of NSF and NIH money is (and should be) going. It is simply not possible to say where a lot of these things are going, and which ones will bear fruit. If you require everyone to sign forms saying that Yes, This Project Has Immediate Economic and National Security Impact, then the best you can hope for is to make everyone lie to you.
Update: a terrific point from the comments section: "(This) argument was often made when firms were reducing costs by shutting down particular pieces of R&D. The general idea was that the firm would stop doing the things that were unlikely to work, and focus more on the things that would work, and hence improve financial returns on R&D. This argument is implausible because successful R&D is wildly profitable. Financial returns are only dragged down by the things that don't work. Therefore, any company that could REALLY distinguish with any precision between winners and losers on a prospective basis should double or triple its R&D investment, and not cut it."
Category: Current Events | Who Discovers and Why
This is a bad idea: Representative Lamar Smith (R-TX) is circulating a draft of a bill to change the way the National Science Foundation reviews grant applications. Science magazine obtained a copy of the current version, and it would require the NSF to certify that all research it funds is:
1) "…in the interests of the United States to advance the national health, prosperity, or welfare, and to secure the national defense by promoting the progress of science;
2) "… the finest quality, is groundbreaking, and answers questions or solves problems that are of utmost importance to society at large; and
3) "…not duplicative of other research projects being funded by the Foundation or other Federal science agencies."
If we could fund things this way, we would be living in a different world entirely. Research, though, does not and cannot follow these guidelines. A lot of stuff gets looked into that doesn't work out, and a lot of things that do work out don't look like they're ever going to be of much use for anything. We are not smart enough to put bets down on only the really important stuff up front - and by "we", I mean the entire scientific community, and the director of the NSF, and even Representative Lamar Smith.
Useless and even bizarre things get funded under the current system, of that I have no doubt. But telling everyone that all research has to be certified as good for something is silly grandstanding. What will happen is that people will rewrite their grant applications in order to make them look attractive under whatever rules apply - which, naturally, is how it's always worked. So I'm not saying that Rep. Smith's proposal would Destroy Science in America. That would take a lot more work. No, what I'm saying is that Rep. Smith's view of the world is flawed. He seems to believe that legislation of this sort is the answer to large, difficult problems (witness his work on the Stop Online Piracy Act). As such, he would seem to be exactly the sort of person that I wish could be barred from serving as an elected official.
If I were Lamar Smith, I would probably be thinking of a bill that I could introduce to that effect (the Stop Overreaching Legislators Act?) But I'm not the sort of person who thinks that the world can be fixed up by passing the right laws and signing the right papers. I'm more in line with Mark Twain, when he said that no one's life, liberty, or property was safe while the legislature was in session.
Note: more thoughts added here, later in the day
Category: Current Events
April 26, 2013
A couple of years back, I wrote about the egregious research fraud case of Diederik Stapel. Here's an extraordinary follow-up in the New York Times Magazine, which will give you the shivers. Here, try this part out:
In one experiment conducted with undergraduates recruited from his class, Stapel asked subjects to rate their individual attractiveness after they were flashed an image of either an attractive female face or a very unattractive one. The hypothesis was that subjects exposed to the attractive image would — through an automatic comparison — rate themselves as less attractive than subjects exposed to the other image.
The experiment — and others like it — didn’t give Stapel the desired results, he said. He had the choice of abandoning the work or redoing the experiment. But he had already spent a lot of time on the research and was convinced his hypothesis was valid. “I said — you know what, I am going to create the data set,” he told me. . .
. . .Doing the analysis, Stapel at first ended up getting a bigger difference between the two conditions than was ideal. He went back and tweaked the numbers again. It took a few hours of trial and error, spread out over a few days, to get the data just right.
He said he felt both terrible and relieved. The results were published in The Journal of Personality and Social Psychology in 2004. “I realized — hey, we can do this,” he told me.
And that's just what he did, for the next several years, leading to scores of publications and presentations on things he had just made up. In light of that Nature editorial statement I mentioned yesterday, this part seems worth thinking on:
. . . The field of psychology was indicted, too, with a finding that Stapel’s fraud went undetected for so long because of “a general culture of careless, selective and uncritical handling of research and data.” If Stapel was solely to blame for making stuff up, the report stated, his peers, journal editors and reviewers of the field’s top journals were to blame for letting him get away with it. The committees identified several practices as “sloppy science” — misuse of statistics, ignoring of data that do not conform to a desired hypothesis and the pursuit of a compelling story no matter how scientifically unsupported it may be.
The adjective “sloppy” seems charitable. . .
It may well be. The temptation of spicing up the results is always there, in any branch of science, and it's our responsibility to resist it. That means not only resisting the opportunities to fool others, it means resisting fooling ourselves, too, because who would know better what we'd really like to hear? Reporting only the time that the idea worked, not the other times when it didn't. Finding ways to explain away the data that would invalidate your hypothesis, but giving the shaky stuff in your favor the benefit of the doubt. N-of-1 experiments taken as facts. No, not many people will go as far as Diederik Stapel (or could, even if they wanted to - he was quite talented at fakery). Unfortunately, things go on all the time that might differ from him in degree, but not in kind.
Category: The Dark Side | The Scientific Literature
I wanted to mention a project of Prof. Phil Baran of Scripps and his co-authors, Yoshihiro Ishihara and Ana Montero. It's called the Portable Chemist's Consultant, and it's available for iPads here. And here's a web-based look at its features. Baran was good enough to send me an evaluation copy, so I've had a chance to look through it in detail.
It's clearly based on his course in heterocyclic chemistry, and the chapters on pyridines and other heterocycles read like very well-thought-out review articles. But they also take advantage of the iPad's interface, in that specific transformations are shown in detail (with color and animation), and each of these can be expanded to a wider presentation and a thorough list of references (which are linked in their turn). The "Consumer Reports" style tables of recommended synthetic methods at the end of each section seem very useful, too, although they might need some notation for how much experimental support there is for each combination. For an overview of these topics, though, I doubt if anyone could do this better; I became a more literate heterocyclic chemist just by flipping through things. (Here's a video clip of some of these features in action).
So, do I have any reservations? A few. One of the bigger ones (which I'm told that Baran and his team are addressing) might sound trivial: I'm not sure about the title. As it stands, "The Portable Heterocyclic Chemistry Consultant" would be a much more accurate one, because there are large swaths of chemistry that fall within its current subtitle ("A Survival Guide for Discovery, Process, and Radiolabeling") which are not even touched on. For example, scale-up chemistry is mentioned on the cover, but in the current version of the book I didn't really see anything that was of particular relevance to actual scale-up work (things like the feasibility of solvent switching, heat transfer effects and reaction thermodynamics, run-to-run variability and potential purification methods, reagent sourcing, etc.) For medicinal chemists, I can say that the focus is completely on just the synthetic organic end of things; there's nothing on the behavior of any of the heterocyclic systems in vivo (pharmacokinetic trends, routes of metabolism, known toxicity problems, and so on). There's also nothing on spectral characterization, or any analytical chemistry of any sort, and I found no mention of radiolabeling (although I'd be glad to be corrected on that).
So for these reasons, it's a very academic work, but a very good one of its type. And Prof. Baran tells me that it's being revised constantly (at no charge to previous purchasers), and that these sorts of topics are in the works for later versions. If this book is indeed one of those gifts that keeps on giving, then it's a bargain as it stands, but (at the same time) I think that potential buyers should be aware of what they're getting in the current version.
My second reservation is technological. The book is only available on the iPad, and I'm not completely sure that this is a good idea. There's no way that it could be as useful in print, but a web-based interface would still be fine. (Managing ownership and sales is a lot easier in Apple's ecosystem, to be sure). And I'm not sure how many organic chemists own iPads yet. Baran himself seemed a bit surprised when he found out that I don't own one myself (I borrowed a colleague's to have a look). The most common reaction I've had when I tell people about the "PCC" is to say that they don't own an iPad, either, and to ask if there's any other way they can read it. Another problem is that the people who do have iPads certainly don't take them to the lab bench, which is where a work like this would be most useful. On the other hand, plain old computers are ubiquitous at the bench, thanks to electronic lab notebooks and the like.
All this said, though, if you do own an iPad and need to know about heterocyclic chemistry, you should have a look at this work immediately. If not, well, it's well worth keeping an eye on - these are early days.
April 25, 2013
Earlier this year, I wrote about a method to do NMR experiments at the cellular level or below. A new paper uses this same phenomenon (nitrogen-vacancy defects near the surface of diamond crystals) to do magnetic imaging of individual bacteria.
It's well known that many bacteria have "magnetosome" structures that allow them to sense and react to magnetic fields. If you let them wander over the surface of one of these altered diamond crystals, you can use the single-atom unpaired electrons as sensors. This team (several groups at Harvard and at Berkeley) was able to get sub-cellular resolution, and correlate that with real-time optical images of the bacteria (Magnetospirillum magneticum). It's very odd to see images of single bacteria with their field strengths looking like little bar magnets, but there they are. What we'll find by looking at magnetic fields inside individual cells, I have absolutely no idea, but I hope for all kinds of interesting and baffling things. I wonder what you'd get when mammalian cells take up magnetic nanoparticles, for example?
In other news, it's already late April, and things are already far enough along for me to talk about something on the blog as having happened "earlier this year". Sheesh.
This has to be a good thing. From the latest issue of Nature comes news of an initiative to generate more reproducible papers:
From next month, Nature and the Nature research journals will introduce editorial measures to address the problem by improving the consistency and quality of reporting in life-sciences articles. To ease the interpretation and improve the reliability of published results we will more systematically ensure that key methodological details are reported, and we will give more space to methods sections. We will examine statistics more closely and encourage authors to be transparent, for example by including their raw data. . .
. . .We recognize that there is no single way to conduct an experimental study. Exploratory investigations cannot be done with the same level of statistical rigour as hypothesis-testing studies. Few academic laboratories have the means to perform the level of validation required, for example, to translate a finding from the laboratory to the clinic. However, that should not stand in the way of a full report of how a study was designed, conducted and analysed that will allow reviewers and readers to adequately interpret and build on the results.
I hope that Science, the Cell journals at Elsevier, and other leading outlets for such results will follow through with something similar. In this time of online supplementary info and basically unlimited storage ability, there's no reason not to disclose as much information as possible in a scientific publication. And the emphasis on statistical rigor and possible sources of error is just what's needed as well. Let's see who follows suit first, and congratulate them. And let's see who fails to respond, and treat them appropriately, too.
A lot of people (and I'm one of them) have been throwing the word "epigenetic" around a lot. But what does it actually mean - or what is it supposed to mean? That's the subject of a despairing piece from Mark Ptashne of Sloan-Kettering in a recent PNAS. He noted this article in the journal, one of their "core concepts" series, and probably sat down that evening to write his rebuttal.
When we talk about the readout of genes - transcription - we are, he emphasizes, talking about processes that we have learned many details about. The RNA Polymerase II complex is very well conserved among living organisms, as well it should be, and its motions along strands of DNA have been shown to be very strongly affected by the presence and absence of protein transcription factors that bind to particular DNA regions. "All this is basic molecular biology, people", he does not quite say, although you can pick up the thought waves pretty clearly.
So far, so good. But here's where, conceptually, things start going into the ditch:
Patterns of gene expression underlying development can be very complex indeed. But the underlying mechanism by which, for example, a transcription activator activates transcription of a gene is well understood: only simple binding interactions are required. These binding interactions position the regulator near the gene to be regulated, and in a second binding reaction, the relevant enzymes, etc., are brought to the gene. The process is called recruitment. Two aspects are especially important in the current context: specificity and memory.
Specificity, naturally, is determined by the location of regulatory sequences within the genome. If you shuffle those around deliberately, you can make a variety of regulators work on a variety of genes in a mix-and-match fashion (and indeed, doing this is the daily bread of molecular biologists around the globe). As for memory, the point is that you have to keep recruiting the relevant enzymes if you want to keep transcribing; these aren't switches that flip on or off forever. And now we get to the bacon-burning part:
Curiously, the picture I have just sketched is absent from the Core Concepts article. Rather, it is said, chemical modifications to DNA (e.g., methylation) and to histones— the components of nucleosomes around which DNA is wrapped in higher organisms—drive gene regulation. This obviously cannot be true because the enzymes that impose such modifications lack the essential specificity: All nucleosomes, for example, “look alike,” and so these enzymes would have no way, on their own, of specifying which genes to regulate under any given set of conditions. . .
. . .Histone modifications are called “epigenetic” in the Core Concepts article, a word that for years has implied memory . . . This is odd: It is true that some of these modifications are involved in the process of transcription per se—facilitating removal and replacement of nucleosomes as the gene is transcribed, for example. And some are needed for certain forms of repression. But all attempts to show that such modifications are “copied along with the DNA,” as the article states, have, to my knowledge, failed. Just as transcription per se is not “remembered” without continual recruitment, so nucleosome modifications decay as enzymes remove them (the way phosphatases remove phosphates put in place on proteins by kinases), or as nucleosomes, which turn over rapidly compared with the duration of a cell cycle, are replaced. For example, it is simply not true that once put in place such modifications can, as stated in the Core Concepts article, “lock down forever” expression of a gene.
Now it does happen, Ptashne points out, that some developmental genes, once activated by a transcription factor, do seem to stay on for longer periods of time. But this takes place via feedback loops - the original gene, once activated, produces the transcription factor that causes another gene to be read off, and one of its products is actually the original transcription factor for the first gene, which then causes the second to be read off again, and so on, pinging back and forth. But "epigenetic" has been used in the past to imply memory, and modifying histones is not a process with enough memory in it, he says, to warrant the term. They are ". . .parts of a response, not a cause, and there is no convincing evidence they are self-perpetuating".
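The distinction Ptashne draws can be made concrete with a toy calculation. In the sketch below (my own illustration, not from his article; all rates, thresholds, and step counts are invented for the purpose), a gene product that participates in a positive feedback loop stays switched on after a transient stimulus ends, while the same product without the loop simply decays away, the way an unmaintained histone mark would:

```python
# Toy model: "memory" via a positive transcriptional feedback loop,
# contrasted with a plain decaying modification. Purely illustrative;
# none of these numbers come from real biology.

def simulate(feedback, steps=200, pulse=20):
    """Track a protein level under first-order decay. Production is on
    while the external stimulus lasts, or (if feedback=True) whenever
    the protein itself exceeds a self-activation threshold."""
    level = 0.0
    decay = 0.1          # fraction lost per step (turnover, removal enzymes)
    production = 1.0     # synthesis rate while the gene is active
    threshold = 2.0      # level needed for the loop to sustain itself
    for t in range(steps):
        stimulus = t < pulse                          # transient signal
        active = stimulus or (feedback and level > threshold)
        level += (production if active else 0.0) - decay * level
    return level

with_loop = simulate(feedback=True)      # stays near its steady state
without_loop = simulate(feedback=False)  # decays to nearly nothing
print(f"with feedback: {with_loop:.2f}, without: {without_loop:.2f}")
```

The point of the contrast is Ptashne's: persistence comes from continual re-recruitment driven by the loop, not from any intrinsic permanence of the mark itself.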
What we have here, as Strother Martin told us many years ago, is a failure to communicate. The biologists who have been using the word "epigenetic" in its original sense (which Ptashne and others would tell you is not only the original sense, but the accurate and true one), have seen its meaning abruptly hijacked. (The Wikipedia entry on epigenetics is actually quite good on this point, or at least it was this morning). A large crowd that previously paid little attention to these matters now uses "epigenetic" to mean "something that affects transcription by messing with histone proteins". And as if that weren't bad enough, articles like the one that set off this response have completed the circle of confusion by claiming that these changes are somehow equivalent to genetics itself, a parallel universe of permanent changes separate from the DNA sequence.
I sympathize with him. But I think that this battle is better fought on the second point than the first, because the first one may already be lost. There may already be too many people who think of "epigenetic" as meaning something to do with changes in expression via histones, nucleosomes, and general DNA unwinding/presentation factors. There really does need to be a word to describe that suite of effects, and this (for better or worse) now seems as if it might be it. But the second part, the assumption that these are necessarily permanent, instead of mostly being another layer of temporary transcriptional control, that does need to be straightened out, and I think that it might still be possible.
April 24, 2013
The University of Chicago Press has sent along a copy of a new book by DePaul professor Ted Anton, The Longevity Seekers. It's a history of the last thirty years or so of advances in understanding the biochemical pathways of aging. As you'd imagine, much of it focuses on sirtuins, but many other discoveries get put into context as well. There are also thoughts on what this whole story tells us about medical research, the uses of model animal systems, about the public's reaction to new discoveries, and what would happen if (or when) someone actually succeeds in lengthening human lifespan. (That last part is an under-thought topic among people doing research in the field, in my experience, at least in print).
Readers will be interested to note that Anton uses posts and comments on this blog as source material in some places, when he talks about the reaction in the scientific community to various twists and turns in the story. (You'll be relieved to hear that he's also directly interviewed almost all the major players in the field, as well!) If you're looking for a guide to how the longevity field got to where it is today and how everything fits together so far, this should get you up to speed.