
About this Author

Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases. To contact Derek, email him directly: derekb.lowe@gmail.com. Twitter: Dereklowe


In the Pipeline

October 20, 2014

Compound Properties: Starting a Renunciation

Posted by Derek

I've been thinking a lot recently about compound properties, and what we use them for. My own opinions on this subject have been changing over the years, and I'm interested to see if I have any company on this.

First off, why do we measure things like cLogP, polar surface area, aromatic ring count, and all the others? A quick (and not totally inaccurate) answer is "because we can", but what are we trying to accomplish? Well, we're trying to read the future a bit and decrease the horrendous failure rates for drug candidates, of course. And the two aspects that compound properties are supposed to help with are PK and tox.

Of the two, pharmacokinetics is the one with the better shot at relevance. But how fine-grained can we be with our measurements? I don't think it's controversial to say that compounds with really high cLogP values are going to have, on average, more difficult PK, for various reasons. Compounds with lots of aromatic rings in them are, on average, going to have more difficult PK, too. But how much is "lots" or "really high"? That's the problem, because I don't think that you can draw a useful line and say that things on one side of it are mostly fine, and things on the other are mostly not. There's too much overlap, and too many exceptions. The best you can hope for, if you're into line-drawing, is to draw one up pretty far into the possible range and say that things below it may or may not be OK, but things above it have a greater chance of being bad. (This, to my mind, is all that we mean by all the "Rule of 5" stuff). But what good does that do? Everyone doing drug discovery already knows that much, or should. Where we get into trouble is when we treat these lines as if they were made of electrified barbed wire.
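To make it concrete, here's roughly what that sort of line-drawing looks like in code. This is just a minimal sketch of my own (using the open-source RDKit toolkit, not any particular group's workflow) that computes the classic rule-of-five descriptors and counts violations, which is about as sophisticated as most of these filters really are:

```python
# Minimal sketch of rule-of-five style property flags (illustrative only,
# not any specific company's filter). Requires the open-source RDKit toolkit.
from rdkit import Chem
from rdkit.Chem import Crippen, Descriptors, Lipinski

def ro5_profile(smiles: str) -> dict:
    """Return the classic Lipinski descriptors and a count of violations."""
    mol = Chem.MolFromSmiles(smiles)
    props = {
        "MW":    Descriptors.MolWt(mol),
        "cLogP": Crippen.MolLogP(mol),
        "HBD":   Lipinski.NumHDonors(mol),
        "HBA":   Lipinski.NumHAcceptors(mol),
    }
    props["violations"] = sum([
        props["MW"] > 500,
        props["cLogP"] > 5,
        props["HBD"] > 5,
        props["HBA"] > 10,
    ])
    return props

# Aspirin, as a trivially compliant example - the arguments all start with
# compounds that sit near (or past) the lines.
print(ro5_profile("CC(=O)Oc1ccccc1C(=O)O"))
```

Useful as a rough orientation, in other words, but not something to treat as electrified barbed wire.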

That's because of a larger problem with metrics aimed at PK: PK is relatively easy data to get. When in doubt, you should just dose the compound and find out. This makes predicting PK problems a lower-value proposition - the real killer application would be predicting toxicology problems. I fear that over the years many rule-of-five zealots have confused these two fields, out of a natural hope that something can be done about the latter (or perhaps out of thinking that the two are more related than they really are). That's unfortunate, because to my mind, this is where compound property metrics get even less useful. That recent AstraZeneca paper has had me thinking, the one where they state that they can't reproduce the trends reported by Pfizer's group on the influences of compound properties. If you really can take two reasonably-sized sets of drug discovery data and come to opposite conclusions about this issue, what hope does this approach have?

Toxicology is just too complicated, I think, for us to expect that any simple property metrics can tell us enough to be useful. That's really annoying, because we could all really use something like that. But increasingly, I think we're still on our own, where we've always been, and that we're just trying to make ourselves feel better when we think otherwise. That problem is particularly acute as you go up the management ladder. Avoiding painful tox-driven failures is such a desirable goal that people are tempted to reach for just about anything reasonable-sounding that holds out hope for it. And this one (compound property space policing) has many other tempting advantages - it's cheap to implement, easy to measure, and produces piles of numbers that make for data-rich presentations. Even the managers who don't really know much chemistry can grasp the ideas behind it. How can it not be a good thing?

Especially when the alternative is so, so. . .empirical. So case-by-case. So disappointingly back-to-where-we-started. I mean, getting up in front of the higher-ups and telling them that no, we're not doing ourselves much good by whacking people about aromatic ring counts and nitrogen atom counts and PSA counts, etc., that we're just going to have to take the compounds forward and wait and see like we always have. . .that doesn't sound like much fun, does it? This isn't what anyone wants to hear. You're going to do a lot better if you can tell people that you've Identified The Problem, and How to Address It, and that this strategy is being implemented right now, and here are the numbers to prove it. Saying, in effect, that we can't do anything about it runs the risk of finding yourself replaced by someone who will say that we can.

But all that said, I really am losing faith in property-space metrics as a way to address toxicology. The only thing I'm holding on to are some of the structure-based criteria. I really do, for example, think that quinones are bad news. I think if you advance a hundred quinones into the clinic, that a far higher percentage of them will fail due to tox and side effects than a hundred broadly similar non-quinones. Same goes for rhodanines, and a few other classes, those "aces in the PAINS deck" I referred to the other day. I'm still less doctrinaire about functional groups than I used to be, but I still have a few that I balk at.

And yes, I know that there are drugs with all these groups in them. But if you look at the quinones, for example, you find mostly cytotoxics and anti-infectives which are cytotoxins with some selectivity for non-mammalian cells. If you're aiming at a particularly nasty target (resistant malaria, pancreatic cancer), go ahead and pull out all the stops. But I don't think anyone should cheerfully plow ahead with such structures unless there are such mitigating circumstances, or at least not without realizing the risks that they're taking on.

But this doesn't do us much good, either - most medicinal chemists don't want to advance such compounds anyway. In fact, rather than being too permissive about things like quinones, most of us are probably too conservative about the sorts of structures we're willing to deal with. There are a lot of funny-looking drugs out there, as it never hurts to remind oneself. Peeling off the outer fringe of these (and quinones are indeed the outer fringe) isn't going to increase anyone's success rate much. So what to do?

I don't have a good answer for that one. I wish I did. It's a rare case when we can say, just by looking at its structure, that a particular compound just won't work. I've been hoping that the percentages would allow us to say more than that about more compounds. But I'm really not sure that they do, at least not to the extent that we need them to, and I worry that we're kidding ourselves when we pretend otherwise.

Comments (21) + TrackBacks (0) | Category: Drug Assays | Drug Development | In Silico

October 17, 2014

More on "Metabolite Likeness" as a Predictor

Posted by Derek

A recent computational paper that suggested that similarity to known metabolites could help predict successful drug candidates brought in a lot of comments around here. Now the folks at Cambridge MedChem Consulting have another look at it here.

The big concern (as was expressed by some commenters here as well) is the Tanimoto similarity cutoff of 0.5. Does that make everything look too similar, or not? CMC has some numbers across different data sets, and suggests that this cutoff is, in fact, too permissive to allow for much discrimination. People with access to good comparison sets of compounds that made it and compounds that didn't - basically, computational chemists inside large industrial drug discovery organizations - will have a better chance to see how all this holds up.
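For anyone who wants a feel for what a 0.5 cutoff means in practice, here's a quick sketch of my own - standard ECFP4-style fingerprints and a made-up three-compound "metabolite set", neither of which is necessarily what the original paper used (and the fingerprint choice matters a lot for where 0.5 falls):

```python
# Quick illustrative sketch of a "metabolite-likeness" score: the best Tanimoto
# similarity of a candidate against a set of known metabolites. The fingerprint
# type and the tiny metabolite list here are placeholders, not the paper's.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def fingerprint(smiles):
    return AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smiles), 2, nBits=2048)

def best_metabolite_similarity(candidate, metabolites):
    cand_fp = fingerprint(candidate)
    return max(DataStructs.TanimotoSimilarity(cand_fp, fingerprint(m)) for m in metabolites)

# Toy "metabolite set": citric acid, glucose (stereochemistry omitted), tryptophan.
metabolite_set = [
    "OC(=O)CC(O)(CC(=O)O)C(=O)O",
    "OCC1OC(O)C(O)C(O)C1O",
    "N[C@@H](Cc1c[nH]c2ccccc12)C(=O)O",
]
print(best_metabolite_similarity("CC(=O)Oc1ccccc1C(=O)O", metabolite_set))  # aspirin
```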

Comments (5) + TrackBacks (0) | Category: Drug Development | In Silico

Different Screening, Different Thermodynamics?

Posted by Derek

Chris Lipinski and the folks at Collaborative Drug Discovery send word of an interesting webinar that will take place this coming Wednesday (October 22nd) at 2 PM EST. It's on enthalpic and entropic trends in ligand binding, and how various screening and discovery techniques might bias these significantly.

Here's the registration page if you're interested. I'm curious about what they've turned up - my understanding is that it will explore, among other things, the differences in molecules selected by industry-trained medicinal chemists versus the sorts that are reported by more academic chemical biologists. As has come up here several times in the past, there certainly do seem to be some splits there, and the CDD people seem to have some numbers to back up those impressions.

Comments (5) + TrackBacks (0) | Category: Academia (vs. Industry) | Chemical Biology | Drug Assays

October 16, 2014

The Electromagnetic Field Stem Cell Authors Respond

Posted by Derek

The authors of the ACS Nano paper on using electromagnetic fields to produce stem cells have responded on PubPeer. They have a good deal to say on the issues around the images in their paper (see the link), and I don't think that argument is over yet. But here's what they have on criticisms of their paper in general:

Nowhere in our manuscript do we claim “iPSCs can be made using magnetic fields”. This would be highly suspect indeed. Rather, we demonstrate that in the context of highly reproducible and well-established reprogramming to pluripotency with the Yamanaka factors (Oct4, Sox2, Klf4, and cMyc/or Oct4 alone), EMF influences the efficiency of this process. Such a result is, to us, not surprising given that EMF has long been noted to have effects on biological system(Adey 1993, Del Vecchio et al. 2009, Juutilainen 2005)(There are a thousand of papers for biological effects of EMF on Pubmed) and given that numerous other environmental parameters are well-known to influence reprogramming by the Yamanaka factors, including Oxygen tension (Yoshida et al. 2009), the presence of Vitamin C (Esteban et al. 2010), among countless other examples.

For individuals such as Brookes and Lowe to immediately discount the validity of the findings without actually attempting to reproduce the central experimental finding is not only non-scientific, but borders on slanderous. We suggest that these individuals take their skepticism to the laboratory bench so that something productive can result from the time they invest prior to their criticizing the work of others.

That "borders on slanderous" part does not do the authors any favors, because it's a rather silly position to take. When you publish a paper, you have opened the floor to critical responses. I'm a medicinal chemist - no one is going to want to let me into their stem cell lab, and I don't blame them. But I'm familiar enough with the scientific literature to wonder what a paper on this subject is doing in ACS Nano and whether its results are valid. I note that the paper itself states that ". . .this physical energy can affect cell fate changes and is essential for reprogramming to pluripotency."

If it makes the authors feel better, I'll rephrase: their paper claims that iPSCs can be made more efficiently by adding electromagnetic fields to the standard transforming-factor mixture. (And they also claim that canceling out the Earth's magnetic field greatly slows this process down). These are very interesting and surprising results, and my first impulse is to wonder if they're valid. That's my first impulse every time I read something interesting and surprising, by the way, so the authors shouldn't take this personally.

There are indeed many papers in PubMed on the effects of electromagnetic fields on cellular processes. But this area has also been very controversial, and (as an outside observer) my strong impression is that there have been many problems with irreproducibility. I have no doubt that people with expertise in stem cell biology will be taking a look at this report and trying to reproduce it as well, and I am eager to see what happens next.

Comments (25) + TrackBacks (0) | Category: Biological News | The Scientific Literature

What's The Going Rate These Days?

Posted by Derek

Time to break out the pseudonyms for the comments section. I've had a couple of people asking (on both sides of the process) what the starting salaries for medicinal chemists are running in the Boston/Cambridge area. It's been a while since this was much of a topic, sad to say, but there is some hiring going on these days, and people are trying to get a feel for what the going rates are. Companies want to make sure that they're making competitive-but-not-too-generous offers, and applicants want to make sure that they're getting a reasonable one, too, naturally.

So anyone with actual data is invited to leave it in the comments section, under whatever name you like. Reports from outside the Boston/Cambridge area (and at other experience levels) are certainly welcome, too, because the same issues apply in other places as well.

Comments (115) + TrackBacks (0) | Category: Business and Markets | How To Get a Pharma Job

No More Varian

Posted by Derek

This week has brought news that Agilent is getting out of the NMR business, which brings an end to the Varian line of machines, one of the oldest in the business. (Agilent bought Varian in 2010). The first NMR I ever used was a Varian EM-360, which was the workhorse teaching instrument back then. A full 60 MHz of continuous wave for your resolving pleasure - Fourier transform? Superconducting magnets? Luxury! Why, we used to dream of. . .

I used many others in the years to come. But over time, the number of players in the NMR hardware market has contracted. You used to be able to walk into a good-sized NMR room and see machines from Varian, Bruker, JEOL, Oxford, GE (edit - added them) and once in a while an oddity like the 80-MHz IBM-brand machine that I used to use at Duke thirty years ago. No more - Bruker is now the major player. Their machines are good ones (and they've been in the business a while, too), but I do wish that they had some competition to keep them on their toes.

How come there isn't any? It's not that NMR spectroscopy is a dying art. It's as useful as ever, if not even more so. But I think that the market for equipment is pretty saturated. Every big company and university has plenty of capacity, and will buy a new machine only once in a while. The smaller companies are usually fixed pretty well, too, thanks to the used equipment market. And most of those colleges that used to have something less than a standard 300 MHz magnet have worked their way up to one.

There's not much room for a new company to come in and say that their high-field magnets are so much better than the existing ones, either, because the hardware has also reached something of a plateau. You can go out and buy a 700 MHz instrument (and Bruker no doubt wishes that you would), and that's enough to do pretty much any NMR experiment that you can think of. 1000 MHz instruments exist, but I'm not sure how many times you run into a situation where one of those would do the job for you, but a 700 wouldn't. I'm pretty sure that no one even knows how to build a 2000 MHz NMR, but if they did, the number sold would probably be countable on the fingers of one hand. Someone would have to invent a great reason for such a machine to exist - this isn't supercomputing, where the known applications can soak up all the power you can throw at them.
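(To put rough numbers on that - my back-of-the-envelope, not anything from the vendors: NMR machines are named for their proton resonance frequency, which scales linearly with the field,

$$\nu_{^{1}\mathrm{H}} = \frac{\gamma}{2\pi}\,B_0 \approx 42.6\ \mathrm{MHz/T}\times B_0$$

so a 700 MHz instrument is about a 16.4 T magnet, a 1000 MHz one is about 23.5 T, and a hypothetical 2000 MHz machine would need roughly 47 T - far beyond anything a persistent superconducting magnet has ever delivered.)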

So farewell to the line of Varian NMR machines. Generations of chemists have used their equipment, but Bruker is the one left standing.

Comments (41) + TrackBacks (0) | Category: Analytical Chemistry

October 15, 2014

Not The Sort Of Thing You'd Work With, Given a Choice

Posted by Derek

Here's a paper that illustrates a different way of looking at the world than many medicinal chemists would have. It discusses inhibitors of SETD8, an unusual epigenetic enzyme that is the only known methyltransferase to target lysine 20 of histone H4. Inhibitors of it would help to unravel just what functions that has, presumably several that no other pathway is quite handling. But finding decent methyltransferase inhibitors has not been easy.
[Figure: structures of the three quinone screening hits (NSC663284, BVT948, and ryuvidine)]
When you search for them, actually, you find compounds like the ones in this paper. Most medicinal chemists will look at these, say the word "quinone", perhaps take a moment to spit on the floor or add a rude adjective, and move on to see if there's anything better to look at. Quinones are that unpopular, and with good reason. They're redox-active, can pick up nucleophiles as Michael acceptors, react with amines - they have a whole suite of unattractive behaviors. And that explains their profile in cells and whole animals, with a range of toxic, carcinogenic, and immunologic liabilities. A lot of very active natural products have a quinone in them - it's a real warhead. No medicinal chemist with any experience would feel good about trying to advance one as a lead compound, and (for the same reasons) they tend to make poor tool compounds as well. You just don't know what else they're hitting, and the chances of them hitting something else are too high.

The authors of this paper, though, have a higher tolerance:

In the present work, we characterize these compounds and demonstrate that NSC663284, BVT948, and ryuvidine (3 out of the 4 HTS hits) inhibit SETD8 via different modes. NSC663284 (SPS8I1), ryuvidine (SPS8I2), and BVT948 (SPS8I3) efficiently and selectively suppress cellular H4K20me1 at doses lower than 5 μM within 24 h. . . The cells treated with SPS8I1−3 (Small-molecule Pool of SETD8 Inhibitor) recapitulate cell-cycle-arrest phenotypes similar to what were reported for knocking down SETD8 by RNAi. Given that the three compounds have distinct structures and inhibit SETD8 in different manners, they can be employed collectively as chemical genetic tools to interrogate SETD8-involved methylation.

I would be very careful about doing that, myself. I don't find those structures as distinct as all that (quinone, quinone, quinone), and I'm not surprised to find that they arrest the cell cycle. But do they do it via SETD8? To be fair, they do show selectivity over the other enzymes used in the screening panel (SETD7, SETD2, and GLP). They went on to profile them against several lysine methyltransferases and several arginine methyltransferases. The most selective of the bunch was 2.5x more active against SETD8 compared to the next most active target, which is honestly not a whole lot. (And I note that the authors spent some time a few paragraphs before talking about how their activity measurements are necessarily uncertain).

They do address the quinone problem, but in a somewhat otherworldly manner:

Given that SPS8I1−3 are structurally distinct except for their quinonic moiety (Figure 1a, highlighted in red), we reasoned that they may act on SETD8 differently (e.g., dependence on cofactor or substrate). . .

This, to many medicinal chemists, is a bit like saying that several species of poisonous snake are distinct except for their venom-filled fangs. The paper does seem to find differences in how the three inhibitors respond to varying substrate concentrations, but they also find (unsurprisingly) that all three work by covalent inhibition. Studies on mutant forms of the enzyme suggest strongly that two of the compounds are hitting a particular Cys residue (270), while the third "may target Cys residues in a more general manner". To their credit, they did try three quinone-containing compounds from commercial sources and found them inactive, but that just shows that not every quinone inhibits their enzyme.

This, too, is what you'd expect: if you did a full proteome analysis of what a given quinone compound hits, I'm sure that you'd find varying fingerprints for each one. But even though I have no objection to covalent inhibitors per se, I'm nervous about ones that have so many potential mechanisms. The size and shape of the three compounds shown will surely keep them from doing all the damage that a smaller quinone is capable of doing, but I fear that there's still plenty of damage in them.

Indeed, when they do cell assays, they find that each of the compounds has a somewhat different profile of cell cycle arrest, and say that this is probably due to their off-target effects. But they go on to wind things up like this:

Structurally distinct SPS8I1−3 also display different modes of SETD8 inhibition. Such differences also make SPS8I1−3 less likely to act on other common cellular targets besides SETD8. As a result, the shared phenotypes of the 3 compounds are expected to be associated with SETD8 inhibition. . .Such robust inhibition of SETD8 by SPS8I1−3, together with their different off-target effects, argues that these compounds can be used collectively as SETD8 inhibitors to offset off-target effects of individual reagents. At this stage, we envision using all three compounds to examine SETD8 inhibition and then focusing on the phenotypes shared by all of them.

I have to disagree there. I would be quite worried about how many other cellular processes are being disrupted by these compounds. In fact, the authors already point to some of these. Their SPS8I1, they note, has already been reported as a CDC25 inhibitor. SPS8I2 has been shown to be a CDK2/4 inhibitor, and SPS8I3 has been reported as an inhibitor of a whole list of protein tyrosine phosphatases. None of these enzymes, I would guess, has any particularly great structural homology with SETD8, and those activities are surely only the beginning. How is all this to be untangled? Using all three of them to study the same system is likely to just confuse things more rather than throwing light on common mechanisms. Consider the background: even a wonderful, perfectly selective SETD8 inhibitor would be expected to induce a complex set of phenotypes, varying with the cell type and the conditions.

And these are not wonderful inhibitors. They are quinones, aces in the deck of PAINS. No matter what, they need a great deal more characterization before any conclusions can be drawn from their activity. A charitable view of them would be that such characterization, along with a good deal of chemistry effort, might result in derivatives that have a decent chance of hitting SETD8 in a useful manner. An uncharitable view would be that they should be poured into the red waste can before they use up any more time and money.

Comments (42) + TrackBacks (0) | Category: Drug Assays

October 14, 2014

Combichem Into Drugs: How Many?

Posted by Derek

So here's a question I got from a reader the other day, that I thought I'd put up on the site. How many drugs have there been whose origins were in combichem? I realize that this could be tricky to answer, because compound origins are sometimes forgotten or mysterious. But did the combichem boom of the 1990s produce any individual compound success stories?

Comments (49) + TrackBacks (0) | Category: Drug Assays | Drug Industry History

Electromagnetic Production of Stem Cells? Really?

Posted by Derek

Now this is an odd paper: its subject matter is unusual, where it's published is unusual, and it's also unusual that no one seems to have noticed it. I hadn't, either. A reader sent it along to me: "Electromagnetic Fields Mediate Efficient Cell Reprogramming into a Pluripotent State".

Yep, this paper says that stem cells can be produced from ordinary somatic cells by exposure to electromagnetic fields. Everyone will recall the furor that attended the reports that cells could be reprogrammed by exposure to weak acid baths (and the eventual tragic collapse of the whole business). So why isn't there more noise around this publication?

One answer might be that not many people who care about stem cell biology read ACS Nano, and there's probably something to that. But that immediately makes you wonder why the paper is appearing there to start with, because it's also hard to see how it relates to nanotechnology per se. An uncharitable guess would be that the manuscript made the rounds of several higher profile and/or more appropriate journals, and finally ended up where it is (I have no evidence for this, naturally, but I wouldn't be surprised to hear that this was the case).

So what does the paper itself have to say? It claims that "extremely low frequency electromagnetic fields" can cause somatic cells to transform into pluripotent cells, and that this process is mediated by EMF effects on a particular enzyme, the histone methyltransferase Mll2. That's an H3K4 methyltransferase, and it has been found to be potentially important in germline stem cells and spermatogenesis. Otherwise, I haven't seen anyone suggesting it as a master regulator of stem cell generation, but then, there's a lot that we don't know about epigenetics and stem cells.

There is, however, a lot that we do know about electromagnetism. Over the years, there have been uncountable reports of biological activity for electromagnetic fields. You can go back to the controversy over the effects of power lines in residential areas and the later disputes about the effects of cell phones, just to pick two that have had vast amounts of coverage. The problem is, no one seems to have been able to demonstrate anything definite in any of these cases. As far as I know, studies have either shown no real effects, or (when something has turned up), no one's been able to reproduce it. That goes both for laboratory studies and for attempts at observational or epidemiological studies, too: nothing definite, over and over.

There's probably a reason for that. What I have trouble with is the mechanism by which an enzyme gets induced by low-frequency electromagnetic fields, and that's always been the basic argument against such things. You almost have to assume new physics to make a strong connection, because nothing seems to fit: the energies involved are too weak, the absorptions don't match up, and so on. Or at least that's what I thought, but this paper has a whole string of references about how extremely low-frequency electromagnetic fields do all sorts of things to all sorts of cell types. But it's worth noting that the authors also reference papers showing that they're linked to cancer epidemiology, too. It's true, though, that if you do a Pubmed search for "low frequency electromagnetic field" you get a vast pile of references, although I'm really not sure about some of them.

The authors say that the maximum effect in their study was seen at 50 Hz, 1 mT. That is indeed really, really low frequency - the wavelength for a radio signal down there is about 6000 kilometers. Just getting antennas to work in that range is a major challenge, and it's hard for me to picture how subcellular structures could respond to these wavelengths at all. There seem to be all sorts of theories in the literature about how enzyme-level and transcription-level effects might be achieved, but no consensus (from what I can see). Most of the mechanistic discussions I've seen avoid the question entirely - they talk about what enzyme system or signaling pathway might be the "mechanism" for the reported effects, but skip over the big question of how these effects might arise in the first place.
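To put numbers on the wavelength, and on the energy argument from a couple of paragraphs up (my own back-of-the-envelope, not the paper's):

$$\lambda = \frac{c}{f} = \frac{3\times10^{8}\ \mathrm{m/s}}{50\ \mathrm{Hz}} \approx 6\times10^{6}\ \mathrm{m}, \qquad E = hf \approx (6.6\times10^{-34}\ \mathrm{J\,s})(50\ \mathrm{Hz}) \approx 3\times10^{-32}\ \mathrm{J}$$

That photon energy is about eleven orders of magnitude below thermal energy at body temperature ($k_{B}T \approx 4\times10^{-21}$ J at 310 K), which is the quantitative version of "the energies involved are too weak".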

An even odder effect reported in this paper is that the authors also ran these experiments in a setup (a Helmholtz coil) that canceled out the usual environment of the Earth's magnetic field. They found that this worked much less efficiently, and suggest that the natural magnetic field must have epigenetic effects. I don't know what to make of that one, either. Normal cells grown under these conditions showed no effects, so the paper hypothesizes that some part of the pluripotency reprogramming process is exceptionally sensitive. Here, I'll let the authors summarize:

As one of the fundamental forces of nature, the EMF is a physical energy produced by electrically charged objects that can affect the movement of other charged objects in the field. Here we show that this physical energy can affect cell fate changes and is essential for reprogramming to pluripotency. Exposure of cell cultures to EMFs significantly improves reprogramming efficiency in somatic cells. Interestingly, EL-EMF exposure combined with only one Yamanaka factor, Oct4, can generate iPSCs, demonstrating that EL-EMF exposure can replace Sox2, Klf4, and c-Myc during reprogramming. These results open a new possibility for a novel method for efficient generation of iPSCs. Although many chemical factors or additional genes have been reported for the generation of iPSCs, limitations such as integration of foreign genetic elements or efficiency remain a challenge. Thus, EMF-induced cell fate changes may eventually provide a solution for efficient, noninvasive cell reprogramming strategies in regenerative medicine.

Interestingly, our results show that ES cells and fibroblasts themselves are not significantly affected by EMF exposure; rather, cells undergoing dramatic epigenetic changes such as reprogramming seem to be uniquely susceptible to the effects of EMFs. . .

I don't know what to make of this paper, or the whole field of research. Does anyone?

Update: PubPeer is now reporting some problems with images in the paper. Stay, uh, tuned. . .

Comments (36) + TrackBacks (0) | Category: Biological News

October 13, 2014

Alzheimer's in Cell Culture?

Email This Entry

Posted by Derek

While we're talking about cell culture, there's some potentially significant news in Alzheimer's. The Tanzi lab at Mass General is reporting in Nature that they've been able to grow 3D neuronal cultures that actually reproduce the plaque-and-tangle symptoms of Alzheimer's. That's quite a surprise - neurons are notoriously badly behaved in vitro, and Alzheimer's has been a beast to model in any system at all. You can't even get neurons from human Alzheimer's patients to behave like that when you culture them (at least, I've never heard of it being done).

These new cultures apparently respond to secretase inhibitors, which on one level is good news, since you'd expect those compounds to have an effect on them. On the other hand, such compounds have been quite ineffective in human trials, so there's a disconnect here. Is there more to Alzheimer's that these cell cultures don't pick up, or do the compounds behave much worse in vivo (or both)?

This new system, if validated, would seem to open up a whole new avenue for phenotypic screening, which until now has been a lost cause where Alzheimer's is concerned. It's going to be quite interesting to see how this develops, and to see what it can teach us about the real disease. Nothing in this area has come easy, and a break would be welcome. The tricky part will be whether compounds that come out of such a screen will be telling us something about Alzheimer's, or just telling us something about the model. That's always the tricky part.

Update: FierceBiotech notes that Tanzi's "previous insights about Alzheimer's have run into some serious setbacks."

Comments (29) + TrackBacks (0) | Category: Alzheimer's Disease

Diabetes Progress

Posted by Derek

There have recently been some welcome developments in diabetes therapy, both Type I and Type II. For the latter, there's an interesting report of a metabolic uncoupling therapy in Nature Medicine. Weirdly, it uses a known tapeworm medication, niclosamide (specifically, the ethanolamine salt). It's toxic to worms by that same mechanism. If you uncouple oxidative phosphorylation and the electron-transport system in the mitochondria, you end up just chewing up lipids through respiration while not generating any ATP. That's what happens in brown fat (through the action of uncoupling proteins), and that's what's used in mammals for generating extra body heat. Many schemes for cranking this up have been looked at over the years, but a full-scale mitochondrial uncoupling drug would be a nasty proposition in humans (see, for example, dinitrophenol). DNP will indeed make you lose weight, while at the same time you eat ravenously to try to keep up your ATP supply, but this comes at a significant risk of sudden death. (And anything that does a better job than DNP will just skip straight to the "sudden death" part). But niclosamide seems to be a less efficacious uncoupler, which in this case is a good thing.

This mechanism diminishes the fat content in liver and muscle tissue, which should improve insulin sensitivity and glucose uptake, and seems to do so very well in mouse models. The authors (Shengkan Jin and colleagues at Rutgers) have formed a company to try to take something in this area into humans. I wish them luck with that - this really could be a good thing for type II and metabolic-syndrome patients, but the idea has proven very difficult over the years. The tox profile is going to be key, naturally, and taking it into the clinic is really the only way to find out if it'll be acceptable.

The Type I news is even more dramatic: a group at Harvard (led by Doug Melton) reports in Cell that they've been able to produce large quantities of glucose-sensitive beta-cells from stem cell precursors. People have been working towards this goal for years, and it hasn't been easy (you can get cells that secrete insulin but don't sense glucose, for example, and you really don't want that in your body). Transplantation of these new cells into diabetic mice seems to roll back the disease state, so this is another one to try in humans. The tricky part is to keep the immune system from rejecting them (the problem with cell transplants for diabetes in general), but they've managed to protect them in the mouse models, and there's a lot of work going into this part of the idea as well for human trials. This could be very promising indeed, and could, if things go right, be a flat-out cure for many Type I patients. Now that would be an advance.

Comments (12) + TrackBacks (0) | Category: Diabetes and Obesity

October 10, 2014

More on Fluorescent Microscopy Chemistry Prizes

Posted by Derek

I wanted to note (with surprise!) that one of this year's Nobel laureates actually showed up in the comments section of the post I wrote about him. You'd think his schedule would be busier at the moment (!), but here's what he had to say:

A friend pointed this site/thread out to me. I apologize if I was unclear in the interview. #3 and #32 have it right -- I have too much respect for you guys, and don't deserve to be considered a chemist. My field is entirely dependent upon your good works, and I suspect I'll be personally more dependent upon your work as I age.

Cheers, Eric Betzig

And it's for sure that most of the readers around here are not physicists or optical engineers, either! I think science is too important for food fights about whose part of it is where - we're all working on Francis Bacon's program of "the effecting of all things possible", and there's plenty for everyone to do. Thanks very much to Betzig for taking the time to leave the clarification.

[Figure: structure of the photoswitchable rhodamine probe]
[Figure: super-resolution image of a labeled Caulobacter crescentus cell]
With that in mind, I was looking this morning at the various tabs I have open on my browser for blogging subjects, and noticed that one of them (from a week or so back) was a paper on super-resolution fluorescent probes. And it's from one of the other chemistry Nobel winners this year, William Moerner at Stanford! Shown is the rhodamine structure that they're using, which can switch from a nonfluorescent state to a highly fluorescent one. Moerner and his collaborators at Kent State investigated a series of substituted variants of this scaffold, and found one that seems to be nontoxic, very capable of surface labeling of bacterial cells, and is photoswitchable at a convenient wavelength. (Many other photoswitchable probes need UV wavelengths to work, which bacteria understandably don't care for very much).

Shown below the structure drawing is an example of the resolution this probe can provide, using Moerner's double-helix point-spread-function, which despite its name is not an elaborate football betting scheme. That's a single cell of Caulobacter crescentus, and you can see that the dye is almost entirely localized on the cell surface, and that ridiculously high resolutions can be obtained. Being able to resolve features inside and around bacterial cells is going to be very interesting in antibiotic development, and this is the kind of work that's making it possible.

Oh, and just a note: this is a JACS paper. A chemistry Nobel laureate's most recent paper shows up in a chemistry journal - that should make people happy!

Comments (8) + TrackBacks (0) | Category: General Scientific News

You'd Think That This Can't Be Correct

Posted by Derek

Well, here's something to think about over the weekend. I last wrote here in 2011 about the "E-cat", a supposed alternative energy source being touted/developed by Italian inventor Andrea Rossi. Odd and not all that plausible claims of low-energy fusion reactions of nickel isotopes have been made for the device (see the comments section to that post above for more on this), and the whole thing definitely has been staying in my "Probably not real" file. Just to add one complication, Rossi's own past does not appear to be above reproach. And his conduct (and that of his coworker Sergio Focardi) would seem to be a bit strange during this whole affair.

But today there is a preprint (PDF) of another outside-opinion test of the device (thanks to Alex Tabarrok of Marginal Revolution on Twitter for the heads-up). It has several Swedish co-authors (three from Uppsala and one from the Royal Institute of Technology in Stockholm), and the language is mostly pretty measured. But what it has to say is quite unusual - if it's true.

The device itself is no longer surrounded by lead shielding, for one thing. No radiation of any kind appears to be emitted. The test went on for 32 days of continuous operation, and here's the take-home:

The quantity of heat emitted constantly by the reactor and the length of time during which the reactor was operating rule out, beyond any reasonable doubt, a chemical reaction as underlying its operation. This is emphasized by the fact that we stand considerably more than two order of magnitudes from the region of the Ragone plot occupied by conventional energy sources.

The fuel generating the excessive heat was analyzed with several methods before and after the experimental run. It was found that the Lithium and Nickel content in the fuel had the natural isotopic composition before the run, but after the 32 days run the isotopic composition has changed dramatically both for Lithium and Nickel. Such a change can only take place via nuclear reactions. It is thus clear that nuclear reactions have taken place in the burning process. This is also what can be suspected from the excessive heat being generated in the process.

Although we have good knowledge of the composition of the fuel we presently lack detailed information on the internal components of the reactor, and of the methods by which the reaction is primed. Since we are presently not in possession of this information, we think that any attempt to explain the E-Cat heating process would be too much hampered by the lack of this information, and thus we refrain from such discussions.

In summary, the performance of the E-Cat reactor is remarkable. We have a device giving heat energy compatible with nuclear transformations, but it operates at low energy and gives neither nuclear radioactive waste nor emits radiation. From basic general knowledge in nuclear physics this should not be possible. . .

Told you it was interesting. But I'm waiting for more independent verification. As long as Rossi et al. are so secretive about this device, the smell of fraud will continue to cling to it. I truly am wondering just what's going on here, though.

Update: Elforsk, the R&D arm of Sweden's power utility, has said that they want to investigate this further. Several professors from Uppsala reply that the whole thing is likely a scam, and that Elforsk shouldn't be taken in. Thanks to reader HL in the comments section, who notes that Google Translate does pretty well with Swedish-English.

Comments (38) + TrackBacks (0) | Category: General Scientific News

Things I Won't Work With: Peroxide Peroxides

Posted by Derek

Everyone knows hydrogen peroxide, HOOH. And if you know it, you also know that it's well-behaved in dilute solution, and progressively less so as it gets concentrated. The 30% solution will go to work immediately bleaching you out if you are so careless as to spill some on you, and the 70% solution, which I haven't seen in years, provides an occasion to break out the chain-mail gloves.

Chemists who've been around that one know that I'm not using a figure of speech - the lab down the hall from me that used to use the stuff had a pair of spiffy woven-metal gloves for just that purpose. Part of the purpose, I believe, was to make you think very carefully about what you were doing as you put them on. Concentrated peroxide has a long history in rocketry, going back to the deeply alarming Me-163 fighter of World War II. (Being a test pilot for that must have taken some mighty nerves). Me, I have limits. I've used 30% peroxide many times, and would pick up a container of 70%, if I were properly garbed (think Tony Stark). But I'm not working with the higher grades under any circumstances whatsoever.

The reason for this trickiness is the weakness of the oxygen-oxygen bond. Oxygen already has a lot of electron density on it; it's quite electronegative. So it would much rather be involved with something from the other end of the scale, or at least the middle, rather than make a single bond to another pile of electrons like itself. Even double-bonded oxygen, the form that we breathe, is pretty reactive. And when those peroxides decompose, they turn into oxygen gas and fly off into entropic heaven, which is one of the same problems involved in having too many nitrogens in your molecule. There are a lot of things, unfortunately, that can lead to peroxide decomposition - all sorts of metal contaminants, light, spitting at them (most likely), and it doesn't take much. There are apparently hobbyists, though, who have taken the most concentrated peroxide available to them and distilled it to higher strengths. Given the impurities that might be present, and the friskiness of the stuff even when it's clean, this sounds like an extremely poor way to spend an afternoon, but there's no stopping some folks.
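(For the record, the decomposition in question is just

$$2\,\mathrm{H_2O_2\,(l)} \longrightarrow 2\,\mathrm{H_2O\,(l)} + \mathrm{O_2\,(g)}, \qquad \Delta H \approx -98\ \mathrm{kJ}\ \text{per mole of}\ \mathrm{H_2O_2}$$

which is exothermic and releases a mole of gas for every two moles of peroxide: heat plus entropy, the whole problem in one line. Those are standard textbook values, not anything from the hobbyists.)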

Any peroxide (O-O) bond is suspect, if you know what's good for you. Now, if it's part of a much larger molecule, then it's much less likely to go all ka-pow on you (thus the antimalarial drugs artemisinin and arterolane), but honestly, I would still politely turn down an offer to bang on a bunch of pure artemisinin with a hammer. It just seems wrong.

But I have to admit, I'd never thought much about the next analog of hydrogen peroxide. Instead of having two oxygens in there, why not three: HOOOH? Indeed, why not? This is a general principle that can be extended to many other similar situations. Instead of being locked in a self-storage unit with two rabid wolverines, why not three? Instead of having two liters of pyridine poured down your trousers, why not three? And so on - it's a liberating thought. It's true that adding more oxygen-oxygen bonds to a compound will eventually liberate the tiles from your floor and your windows from their frames, but that comes with the territory.

These thoughts were prompted by a recent paper in JACS that describes a new route to "dihydrogen trioxide", which I suppose is a more systematic name than "hydrogen perperoxide", my own choice. Colloquially, I would imagine that the compound is known as "Oh, @#&!", substituted with the most heartfelt word available when you realize that you've actually made the stuff. The current paper has a nice elimination route to it via a platinum complex, one that might be used to make a number of other unlikely molecules (if it can make HOOOH in 20% yield, it'll make a lot of other things, too, you'd figure). It's instantly recognizable in the NMR, with a chemical shift of 13.4 for those barely-attached-to-earth hydrogens.

But this route is actually pretty sane: it can be done on a small scale, in the cold, and the authors report no safety problems at all. And in general, most people working with these intermediates have been careful to keep things cold and dilute. Dihydrogen trioxide was first characterized in 1993 (rather late for such a simple molecule), but there had been some evidence for it in the 1960s (and it had been proposed in some reactions as far back as the 1880s). Here's a recent review of work on it. Needless to say, no one has ever been so foolhardy as to try to purify it to any sort of high concentration. I'm not sure how you'd do that, but I'm very sure that it's a bad, bad, idea. This stuff is going to be much jumpier than plain old hydrogen peroxide (that oxygen in the middle of the molecule probably doesn't know what to do with itself), and I don't know how far you could get before everything goes through the ceiling.

But there are wilder poly-peroxides out there. If you want to really oxidize the crap out of things with this compound, you will turn to the "peroxone process". This is a combination of ozone and hydrogen peroxide, for those times when a single explosive oxidizing agent just won't do. I'm already on record as not wanting to isolate any ozone products, so as you can imagine, I really don't want to mess around with that and hydrogen peroxide at the same time. This brew generates substantial amounts of HOOOH, ozonide radicals, hydroxy radicals and all kinds of other hideous thingies, and the current thinking is that one of the intermediates is the HOOOOO- anion. Yep, five oxygens in a row - I did not type that with my elbows. You'll want the peroxone process if you're treating highly contaminated waste water or the like: here's a look at using it for industrial remediation. One of the problems they had was that as they pumped ozone and peroxide into the contaminated site, the ozone kept seeping back up into the equipment trailer and setting off alarms as if the system were suddenly leaking, which must have been a lot of fun.

What I haven't seen anyone try is using this brew in organic synthesis. It's probably going to be a bit. . .uncontrolled, and lead to some peroxide products that will also have strong ideas of their own. But if you keep things dilute, you should be able to make it through. Anyone ever seen it used for a transformation?

Comments (52) + TrackBacks (0) | Category: Things I Won't Work With

October 9, 2014

The Most Common Heterocycles in Drugs

Posted by Derek

What sorts of heterocycles show up the most in approved drugs? This question has been asked several times before in the literature, but it's always nice to see an update. This one is from the Njardarson group at Arizona, producers of the "Top 200 Drugs" posters.

84% of all unique small-molecule drugs approved by the FDA have at least one nitrogen atom in them, and 59% have some sort of nitrogen heterocycle. Leaving out the cephems and penems, which are sort of a special case and not really general-purpose structures, the most popular ones are piperidine, pyridine, pyrrolidine, thiazole, imidazole, indole, and tetrazole, in that order. Some other interesting bits:

All the four-membered nitrogen heterocycles are beta-lactams; no azetidine-containing structure has yet made it to approval.

The thiazoles rank so highly because so many of them are in the beta-lactam antibiotics as well. Every single approved thiazole is substituted in the 2 position, and no monosubstituted thiazole has ever made it into the pharmacopeia, either.

Almost all the indole-containing drugs are substituted at C3 and/or C5 - pindolol is an outlier.

The tetrazoles are all either antibiotics or cardiovascular drugs (the sartans).

92% of all pyrrolidine-substructure compounds have a substituent on the nitrogen.

Morpholine looks more appealing as a heterocycle than it really is - piperidine and piperazine both are found far more frequently. And I'll bet that many of those morpholines are just there for solubility, and that otherwise a piperidine would have served for SAR purposes. Ethers don't always seem to do that much for you.

Piperidines rule. There's a huge variety of them out there, the great majority substituted on the nitrogen. Azepanes, though, one methylene larger, have only three representatives.

83% of piperazine-containing drugs are substituted at both nitrogens.
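If you want to run that kind of census yourself on whatever structure set you have handy, a rough sketch with RDKit substructure queries would look something like this (my own toy version, not the paper's actual workflow, and the SMARTS patterns are deliberately oversimplified):

```python
# Toy census of nitrogen heterocycles across a set of structures, using RDKit
# SMARTS substructure queries. The patterns are simplified (e.g., the imidazole
# query only catches rings with a free NH), so treat the counts accordingly.
from rdkit import Chem

HETEROCYCLES = {
    "piperidine":  Chem.MolFromSmarts("C1CCNCC1"),
    "pyridine":    Chem.MolFromSmarts("c1ccncc1"),
    "pyrrolidine": Chem.MolFromSmarts("C1CCNC1"),
    "thiazole":    Chem.MolFromSmarts("c1cscn1"),
    "imidazole":   Chem.MolFromSmarts("c1cnc[nH]1"),
}

def heterocycle_census(smiles_list):
    counts = {name: 0 for name in HETEROCYCLES}
    for smi in smiles_list:
        mol = Chem.MolFromSmiles(smi)
        if mol is None:
            continue
        for name, query in HETEROCYCLES.items():
            if mol.HasSubstructMatch(query):
                counts[name] += 1
    return counts

# Toy input; the real exercise would use the full list of approved drugs.
print(heterocycle_census(["c1ccncc1", "C1CCNCC1", "CC(=O)Oc1ccccc1C(=O)O"]))
```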

There are a lot of other interesting bits in the paper, which goes on to examine fused and bicyclic heterocycles. But I think this afternoon I'll go make some piperidines and increase my chances.

Comments (25) + TrackBacks (0) | Category: Chemical News | Drug Industry History

Eric Betzig Is Not a Chemist, And I Don't Much Care

Posted by Derek

Update: Betzig himself has shown up in the comments to this post, which just makes my day.

Yesterday's Nobel in chemistry set off the traditional "But it's not chemistry!" arguments, which I largely try to stay out of. For one thing, I don't think that the borders between the sciences are too clear - you can certainly distinguish the home territories of each, but not the stuff out on the edge. And I'm also not that worked up about it, partly because it's nowhere near a new phenomenon. Ernest Rutherford got his Nobel in chemistry, and he was an experimental physicist's experimental physicist. I'm just glad that a lot of cutting-edge work in a lot of important fields (nanotechnology, energy, medicine, materials science) has to have a lot of chemistry in it.

With this in mind, I thought this telephone interview with Eric Betzig, one of the three laureates in yesterday's award, was quite interesting:

This is a chemistry prize, do you consider yourself a chemist, a physicist, what?

[EB] Ha! I already said to my son, you know, chemistry, I know no chemistry. [Laughs] Chemistry was always my weakest subject in high school and college. I mean, you know, it's ironic in a way because, you know, trained as a physicist, when I was a young man I would look down on chemists. And then as I started to get into super-resolution and, which is really all about the probes, I came to realise that it was my karma because instead I was on my knees begging the chemists to come up with better probes for me all the time. So, it's just poetic justice but I'm happy to get it wherever it is. But I would be embarrassed to call myself a chemist.

Some people are going to be upset by that, but you know, if you do good enough work to be recognized with a Nobel, it doesn't really matter much what it says on the top of the page. "OK, that's fine for the recipients", comes one answer, "but what about the committee? Shouldn't the chemistry prize recognize people who call themselves chemists?" One way to think about that is that it's not the Nobel Chemist prize, earmarked for whatever chemists have done the best work that can be recognized. (The baseball Hall of Fame, similarly, has no requirement that one-ninth of its members be shortstops). It's for chemistry, the subject, and chemistry can be pretty broadly defined. "But not that broadly!" is the usual cry.

That always worries me. It seems dangerous, in a way - "Oh no, we're not such a broad science as that. We're much smaller - none of those big discoveries have anything to do with us. Won't the Nobel committee come over to our little slice of science and recognize someone who's right in the middle of it, for once?" The usual reply to that is that there are, too, worthy discoveries that are pure chemistry, and they're getting crowded out by all this biology and physics. But the pattern of awards suggests that a crowd of intelligent, knowledgable, careful observers can disagree with that. I think that the science Nobels should be taken as a whole, and that there's almost always going to be some blending and crossover. It's true that this year's physics and chemistry awards could have been reversed, and no one would have complained (or at least, not any more than people are complaining now). But that's a feature, not a bug.

Comments (38) + TrackBacks (0) | Category: Chemical News | General Scientific News

October 8, 2014

XKCD on Protein Folding

Posted by Derek

I've been meaning to mention this recent XKCD comic, which is right on target:

[XKCD comic: Protein folding]

"Someone may someday find a harder one", indeed. . .

Comments (25) + TrackBacks (0) | Category: Biological News

The 2014 Chemistry Nobel: Beating the Diffraction Limit

Posted by Derek

This year's Nobel prize in Chemistry goes to Eric Betzig, Stefan Hell, and William Moerner for super-resolution fluorescence microscopy. This was on the list of possible prizes, and has been for several years now (see this comment, which got 2 out of the 3 winners, to my 2009 Nobel predictions post). And it's a worthy prize, since it provides a technique that (1) is useful across a wide variety of fields, from cell biology on through chemistry and into physics, and (2) does so by doing what many people would, at one time, have said was impossible.

The impossible part is beating the diffraction limit. That was first worked out by Abbe in 1873, and it set what looked like a physically impassable limit to the resolution of optical microscopy. Half the wavelength of the light you're using is as far as you can go, and (unfortunately) that means that you can't optically resolve viruses, many structures inside the cell, and especially nothing as small as a protein molecule. (As an amateur astronomer, I can tell you that the same limits naturally apply to telescope optics, too: even under perfect conditions, there's a limit to how much you can resolve at a given wavelength, which is why even the Hubble telescope can't show you Neil Armstrong's footprint on the moon). In any optical system, you're doing very well if the diffraction limit is the last thing holding you back, but hold you back it will.
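The limit Abbe wrote down is usually quoted as

$$d = \frac{\lambda}{2\,\mathrm{NA}}$$

where NA is the numerical aperture of the objective. For green light around 550 nm and a very good oil-immersion objective (NA of about 1.4), that works out to roughly 200 nm - comfortably bigger than a virus, never mind a protein. (Those example numbers are mine, just to put a scale on things.)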
[Image: STED.jpg]
There are several ways to try to sneak around this problem, but the techniques that won this morning are particularly good ones. Stefan Hell worked out an ingenious method called stimulated emission depletion (STED) microscopy. If you have some sort of fluorescent label on a small region of a sample, you get it to glow, as usual, by shining a particular wavelength of light on it. The key for STED is that if another particular wavelength of light is used at the same time, you can cause the resulting fluorescence to shift. Physically, fluorescence results when electrons get excited by light, and then relax back to where they were by emitting a different (longer) wavelength. If you stimulate those electrons by catching them once they're already excited by the first light, they fall back into a higher vibrational state than they would otherwise, which means less of an energy gap, which means less energetic light is emitted - it's red-shifted compared to the usual fluorescence. Pour enough of that second stimulating light into the system after the first excitation, and you can totally wipe out the normal fluorescence.

And that's what STED does. It uses the narrowest possible dot of "normal" excitation in the middle, and surrounds that with a doughnut shape of the second suppressing light. Scanning this bulls-eye across the sample gives you better-than-diffraction-limit imaging for your fluorescent label. Hell's initial work took several years just to realize the first images, but the microscopists have jumped on the idea over the last fifteen years or so, and it's widely used, with many variations (multiple wavelength systems at the same time, high frames-per-second rigs for recording video, and so on). Below is a STED image of a labeled neurofilament compared to the previous state of the art. You'd think that this would be an obvious and stunning breakthrough that would speak for itself, but Hell himself is glad to point out that his original paper was rejected by both Nature and Science.
[Image: STED image.jpg]
You can, in principle, make the excitation spot as small as you wish (more on this in the Nobel Foundation's scientific background on the prize here). In practice, the intensity of the light needed as you push to higher and higher resolution tends to lead to photobleaching of the fluorescent tags and to damage in the sample itself, but getting around these limits is also an active field of research. As it stands, STED already provides excellent and extremely useful images of all sorts of samples - many of those impressive fluorescence microscopy shots of glowing cells are produced this way.
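The scaling behind that "as small as you wish" statement is the standard STED resolution expression (as I understand it from the Nobel background document): the Abbe limit divided by the square root of one plus the ratio of the depletion intensity to the dye's saturation intensity. A minimal sketch, with invented numbers:

import math

def sted_resolution_nm(wavelength_nm, na, depletion_ratio):
    # d = wavelength / (2 * NA * sqrt(1 + I_depletion / I_saturation))
    return wavelength_nm / (2 * na * math.sqrt(1 + depletion_ratio))

# Illustrative numbers: ~600 nm light, NA 1.4 objective.
for ratio in (0, 10, 100, 1000):
    print(f"I/I_sat = {ratio:>4}: ~{sted_resolution_nm(600, 1.4, ratio):.0f} nm")

# At a ratio of zero you recover the ordinary diffraction limit (~214 nm);
# at 100 you're down around 21 nm. The catch is that all that depletion
# light is exactly what drives the photobleaching and sample damage
# mentioned above.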

The other two winners of the prize worked on a different, but related technique: single-molecule microscopy. Back in 1989, Moerner's lab was the first to be able to spectroscopically distinguish single molecules outside the gas phase - pentacene, embedded in crystals of another aromatic hydrocarbon (terphenyl), down around liquid helium temperatures. Over the next few years, a variety of other groups reported single-molecule studies in all sorts of media, which meant that something that would have been thought crazy or impossible when someone like me was in college was now popping up all over the literature.

But as the Nobel background material rightly states, there are some real difficulties with doing single-molecule spectroscopy and trying to get imaging resolution out of it. The data you get from a single fluorescent molecule is smeared out in a Gaussian (or pretty much Gaussian) blob, but you can (in theory) work back from that to where the single point must have been to give you that data. But to do that, the fluorescent molecules have to be scattered apart further than that diffraction limit. Fine, you can do that - but that's too far apart to reconstruct a useful image (Shannon and Nyquist's sampling theorem in information theory sets that limit).
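As for how you can work back from the blob at all: the precision of locating the center of a diffraction-limited spot improves roughly as the spot's width divided by the square root of the number of photons collected (I'm ignoring the background and pixel-size corrections that show up in the full treatment). A rough sketch, with illustrative numbers:

import math

def localization_precision_nm(spot_sigma_nm, photon_count):
    # Leading-order precision for fitting a single emitter's position:
    # sigma_localization ~ sigma_spot / sqrt(N). Background and pixelation
    # corrections are ignored in this sketch.
    return spot_sigma_nm / math.sqrt(photon_count)

# A diffraction-limited spot with a width (sigma) of about 100 nm:
for n in (100, 1000, 10000):
    print(f"{n:>6} photons -> ~{localization_precision_nm(100, n):.1f} nm")

# A bright, photostable label can therefore be pinned down to a few
# nanometers - provided its neighbors stay dark while you're looking at it.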

Betzig himself took a pretty unusual route to his discovery that gets around this problem. He'd been a pioneer in another high-resolution imaging technique, near-field microscopy, but that one was such an impractical beast to realize that it drove him out of the field for a while. (Plenty of work continues in that area, though, and perhaps it'll eventually spin out a Nobel of its own). As this C&E News article from 2006 mentions, he. . .took some time off:

After a several-year stint in Michigan working for his father's machine tool business, Betzig started getting itchy again a few years ago to make a mark in super-resolution microscopy. The trick, he says, was to find a way to get only those molecules of interest within a minuscule field of view to send out enough photons in such a way that would enable an observer to precisely locate the molecules. He also hoped to figure out how to watch those molecules behave and interact with other proteins. After all, says Betzig, "protein interactions are what make life."

Betzig, who at the time was a scientist without a research home, knew also that interactions with other researchers almost always are what it takes these days to make significant scientific or technological contributions. Yet he was a scientist-at-large spending lots of time on a lakefront property in Michigan, often in a bathing suit. Through a series of both deliberate and accidental interactions in the past two years with scientists at Columbia University, Florida State University, and the National Institutes of Health, Betzig was able to assemble a collaborative team and identify the technological pieces that he and Hess needed to realize what would become known as PALM.

He and Hess actually built the first instrument in Hess's living room, according to the article. The key was to have a relatively dense field of fluorescent molecules, but to only have a sparse array of them emitting at any one time. That way you can build up enough information for a detailed picture through multiple rounds of detection, and satisfy both limits at the same time. Even someone totally outside the field can realize that this was a really, really good plan. Betzig describes very accurately the feeling that a scientist gets when an idea like this hits: it seems so simple, and so obvious, that you're sure that everyone else in the field must have been hit by it at the same time, or will be in the next five minutes or so. In this case, he wasn't far off: several other groups were working on similar schemes while he and Hess were commandeering space in that living room. (Here's a video of Hess and Betzig talking about their collaboration).
[Image: PALM.jpg]
Shown here is what the technique can accomplish - this is from the 2006 paper in Science that introduced it to the world. Panel A is a section of a lysosome, with a labeled lysosomal membrane protein. You can say that yep, the protein is in the outer walls of that structure (and not so many years ago, that was a lot to be able to say right there). But panel B is the same image done through Betzig's technique, and holy cow. Take a look at that small box near the bottom of the panel - that's shown at higher magnification in panel D, and the classic diffraction limit isn't much smaller than that scale bar. As I said earlier, if you'd tried to sell people on an image like this back in the early 1990s, they'd probably have called you a fraud. It wasn't thought possible.

The Betzig technique is called PALM, and the others that came along at nearly the same time are STORM, fPALM, and PAINT. These are still being modified all over the place, and other techniques like total internal reflection fluorescence (TIRF) are providing high resolution as well. As was widely mentioned when green fluorescent protein was the subject of the 2008 Nobel, we are currently in a golden (and green, and red, and blue) age of cellular and molecular imaging. (Here's some of Betzig's recent work, for illustration). It's wildly useful, and today's prize was well deserved.

Comments (42) + TrackBacks (0) | Category: Biological News | Chemical Biology | Chemical News

October 7, 2014

German Pharma, Or What's Left of It

Email This Entry

Posted by Derek

Busy day around here on the frontiers of science, so I haven't had a chance to get a post up. A reader did send along this article from the Frankfurter Allgemeine Zeitung, the heavyweight German newspaper known as the "Fahts" (FAZ). (The Chrome browser will run Google's auto-translate past it if you ask, and it comes out sort of coherent).

What they're asking is: how and why did the German pharmaceutical industry decline so much? Parts have been sold off (as with Hoechst and BASF), and some remaining players have merged (as with Bayer and Schering AG). There's still Boehringer and Merck (Darmstadt), but they're fairly far down the rankings in size and drug R&D expenditure. And you don't have to compare things just to the US: all this has taken place while the folks just up the river (Novartis and Roche) have looked much stronger. The article is blaming "Wankelmut" (vacillation, fickleness) at the strategic level for much of this, especially regarding the role of an industrial chemicals division versus a pharma one.

There's something to that. Bayer was urged for years and years by analysts to break up the company, and resisted. Until recently - but now they're going to do it. Meanwhile, the other big German chemistry conglomerates did just that, but divested their pharma arms to other companies (and countries) rather than spinning them out on their own. And there's not much of a German startup/biotech sector backstopping any of this, either. The successes of Amgen, Biogen, Genentech et al. have not happened in Germany - for the most part, players there stay where they are. The big firms stay the big firms, and no one joins their ranks.

And that's what strikes me about many economies in general, as compared to the US. We have more turmoil. It's not always a good thing, but we've also had a lot of science and technology-based companies come out of nowhere to become world leaders. And you can't do that without shaking things around. Is it partly an aversion to that sort of disruption that's led to the current state of affairs, or is this mistaking symptoms for causes? (I mean, the Swiss are hardly known for wild swings in their business sectors, but Swiss pharma has done fine). Thoughts?

Comments (30) + TrackBacks (0) | Category: Business and Markets | Drug Industry History

October 6, 2014

Sunesis Fails with Vosaroxin

Email This Entry

Posted by Derek

When last heard from, Sunesis was trying to get some last compounds through clinical trials, having cut everything else possible along the way (and having sold more shares to raise cash).

Their lead molecule has been vosaroxin (also known as voreloxin and SNS-595), a quinolone which has been in trials for leukemia. Unfortunately, the company said today that the Phase III trial failed to meet its primary endpoint (and the stock's behavior reflects that, thoroughly). The company's trying to make what it can out of secondary endpoints and possible effects in older patients, but the market doesn't seem to be buying it.

Comments (13) + TrackBacks (0) | Category: Business and Markets | Clinical Trials

A New Way to Estimate a Compound's Chances?

Email This Entry

Posted by Derek

Just a few days ago we were talking about whether anything could be predicted about a molecule's toxicity by looking over its biophysical properties. Some have said yes, this is possible (that less polar compounds tend to be more toxic), but a recent paper has said no, that no such correlation exists. This is part of the larger "Rule of 5" discussion, about whether clinical success in general can be (partially) predicted by such measurements (lack of unexpected toxicity is a big factor in that success). And that discussion shows no sign of resolving any time soon, either.

Now comes a new paper that lands right in the middle of this argument. Douglas Kell's group at Manchester has analyzed a large data set of known human metabolites (the Recon2 database, more here) and looked at how similar marketed drugs are to the structures in it. Using MACCS structural fingerprints, they find that 90% of marketed drugs have a Tanimoto similarity of more than 0.5 to at least one compound in the database, and suggest that this could be a useful forecasting tool for new structures.

Now, that's an interesting idea, and not an implausible one, either. But the next things to ask are "Is it valid?" and "What could be wrong with it?" That's the way we learn how to approach pretty much anything new that gets reported in science, of course, although people do tend to take it the wrong way around the dinner table. Applying that in this case, here's what I can think of that could be off:

1. Maybe the reason that everything looks like one of the metabolites in the database is that the database contains a bunch of drug metabolites to start with, perhaps even the exact ones from the drugs under discussion? This isn't the case, though: Recon2 contains endogenous metabolites only, and the Manchester group went through the list removing compounds that are listed as drugs but are also known metabolites (nutritional supplements, for the most part).

2. Maybe Tanimoto similarities aren't the best measurement to use, and overestimate things? Molecular similarity can be a slippery concept, and different people often mean different things by it. The Tanimoto coefficient is the number of features two molecules share, divided by the total number of features present in either one, so a Tanimoto of 1 means that the two fingerprints are identical (a toy example of the arithmetic follows after point 4 below). What does a coefficient of 0.5 tell us? That depends on how those "features" are counted, as one could well imagine, and the various ways of counting are usually referred to as compound "fingerprints". The Manchester group tried several of these, and settled on the 166 descriptors of the MACCS set. And that brings up the next potential problem. . .

3. Maybe MACCS descriptors aren't the best ones to use? I'm not enough of an informatics person to say, although this point did occur to the authors. They don't seem to know the answer, either, however:

However, the cumulative plots of the (nearest metabolite Tanimoto similarity) for each drug using different fingerprints do differ quite significantly depending on which fingerprint is used, and clearly the well-established MACCS fingerprints lead to a substantially greater degree of ‘metabolite-likeness’ than do almost all the other encodings (we do not pursue this here).

So this one is an open question - it's not for sure if there's something uniquely useful about the MACCS fingerprint set here, or if there's something about the MACCS fingerprint set that makes it just appear to be uniquely useful. The authors do note in the paper that they tried to establish that the patterns they saw were ". . .not a strange artefact of the MACCS encoding itself." And there's another possibility. . .

4. Maybe the universe of things that make this cutoff is too large to be informative? That's another way of asking "What does a Tanimoto coefficient of 0.5 or greater tell you?" The authors reference a paper (Baldi and Nasr) on that very topic, which says:

Examples of fundamental questions one would like to address include: What threshold should one use to assess significance in a typical search? For instance, is a Tanimoto score of 0.5 significant or not? And how many molecules with a similarity score above 0.5 should one expect to find? How do the answers to these questions depend on the size of the database being queried, or the type of queries used? Clear answers to these questions are important for developing better standards in chemoinformatics and unifying existing search methods for assessing the significance of a similarity score, and ultimately for better understanding the nature of chemical space.

The Manchester authors say that applying the methods of that paper to their values shows that they're highly significant. I'll take their word for that, since I'm not in a position to run the numbers, but I do note that the earlier paper emphasizes that a particular Tanimoto score's significance is highly dependent on the size of the database, the variety of molecules in it, and the representations used. The current paper doesn't (as far as I can see) go into the details of applying the Baldi and Nasr calculations to their own data set, though.
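Going back to point 2 for a moment, here's the Tanimoto arithmetic on a toy example. The "features" below are made-up stand-ins for fingerprint bits - in practice, a cheminformatics toolkit (RDKit, for instance) would generate the 166 MACCS keys for you:

def tanimoto(features_a, features_b):
    # Shared features divided by the total features present in either molecule.
    a, b = set(features_a), set(features_b)
    return len(a & b) / len(a | b)

# Toy "fingerprints" - labeled features for illustration, not real MACCS keys.
drug_like  = {"aromatic_ring", "amide", "basic_amine", "halogen"}
metabolite = {"aromatic_ring", "amide", "carboxylic_acid"}

print(tanimoto(drug_like, metabolite))   # 2 shared / 5 total = 0.4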

The authors have done a number of other checks, to make sure that they're not being biased by molecular weights, etc. They looked for trends that could be ascribed to molecular properties like cLogP, but found none. And they tested their hypothesis by running 2000 random compounds from Maybridge through, which did indeed generate much different-looking numbers than the marketed drugs.

As for whether their overall method is useful, here's the Manchester paper's case:

. . .we have shown that when encoded using the public MDL MACCS keys, more than 90 % of individual marketed drugs obey a ‘rule of 0.5’ mnemonic, elaborated here, to the effect that a successful drug is likely to lie within a Tanimoto distance of 0.5 of a known human metabolite. While this does not mean, of course, that a molecule obeying the rule is likely to become a marketed drug for humans, it does mean that a molecule that fails to obey the rule is statistically most unlikely to do so.

That would indeed be a useful thing to know. I would guess that people inside various large drug organizations are going to run this method over their own internal database of compounds to see how it performs on their own failures and successes - and that is going to be the real test. How well it performs, though, we may not hear for a while. But I'll keep my ears open, and report on anything useful.

Comments (35) + TrackBacks (0) | Category: Drug Assays | In Silico

October 3, 2014

Meinwald Honored

Email This Entry

Posted by Derek

It's been announced today that Jerry Meinwald (emeritus at Cornell) has won the National Medal of Science in chemistry. That's well deserved - his work on natural product pheromone and signaling systems has had an impact all through chemistry, biology, agricultural science, ecology, and beyond. He and Thomas Eisner totally changed the way that we look at insect behavior, among other things. Their work is the foundation of the whole field of chemical ecology.

So congratulations to Prof. Meinwald for a well-deserved honor, one of many he's received in his long career. But there's one that's escaped him: I note that (the late) Prof. Eisner has a Wikipedia entry, but Meinwald doesn't, which seems to be a bizarre oversight. If I had the time, I'd write one myself - won't someone?

Update: there's a page up now, and it's being expanded.

Comments (11) + TrackBacks (0) | Category: Chemical News

Molecular Biology Turns Into Chemistry

Email This Entry

Posted by Derek

This is the sort of work that is gradually turning molecular biology into chemistry - and it's a good thing. The authors are studying the movement of a transcription factor protein along a stretch of DNA having two of its recognition sites, and using NMR to figure out how it transfers from one to the other. Does it leave one DNA molecule, dissociate into solution, and land on another? Does it slide along the DNA strand to the next site? Or does it do a sort of hop or backflip while still staying with the same DNA piece?

Now, to a first approximation, you may well not care. And I have no particular interest in the HoxD9 transcription factor myself. But I do find transcription factors in general of interest as drug targets - very hard drug targets that need all the help that they can get. The molecular-biology level of understanding starts out like this: there are protein transcription factors that bind to DNA. The protein sequences are (long list), and the DNA sequences that they recognize are (long list), and the selectivities that they show are (even longer list, and getting longer all the time). Under (huge set of specific conditions), a given transcription factor has been found to (facilitate and/or repress) expression of (huge list of specific genes). (Monstrously complex list of) associated proteins have been found to be involved in these processes.

I'm not making fun of this stuff - that's a lot of information, and people have worked very hard to get it. But as you go further into the subject, you have to stop treating these things as modules, boxy shapes on a whiteboard, or database entries, and start treating them as molecules, and that's where things have been going for some years now. When a transcription factor recognizes a stretch of DNA, how exactly does it do that? What's the structure of the protein in the DNA-recognition element, and what interactions does it make with the DNA itself? What hydrogen bonds are formed, what conformational changes occur, what makes the whole process thermodynamically favorable? If you really want to know that in detail, you're going to be accounting for every atom's position and how it got there. That is a chemistry-level understanding of the situation: it's not just HoxD9, it's a big molecule, interacting with another big molecule, and that interaction needs to be understood in all its detail.

We are not there yet. I say that without fear of contradiction, but it's clearly where we need to go, and where a lot of research groups are going. This paper is just one example; there are plenty of others. I think that all of molecular biology is getting pulled this way - studying proteins, for example, inevitably leads you to studying their structure (and to the protein folding problem), and you can't understand that without treating it at an atom-by-atom scale. Individual hydrogen bonds, water molecule interactions, pi-stacks and other such phenomena are essential parts of the process.

So down with the whiteboard shapes and the this-goes-to-this arrows. Those are OK for shorthand, but it's time we figured out what they're shorthand for.

Comments (19) + TrackBacks (0) | Category: Chemical Biology

October 2, 2014

Speaking at Northeastern

Email This Entry

Posted by Derek

For anyone at Northeastern University here in Boston who would be interested in attending, I'll be speaking this evening at 6 to the ACS student affiliate chapter there, in Hurtig Hall (Room 115). If you'd like to attend, let them know at acs.neu@gmail.com. And don't eat my portion of pizza before I get there - I'll need it to remain coherent. Turn down too much free pizza and they'll kick you out of almost any scientific society there is, you know.

Comments (12) + TrackBacks (0) | Category: Blog Housekeeping

Catalyst Pharmaceuticals and Their Disgusting Business Strategy

Email This Entry

Posted by Derek

OK, this seems to be a new business model, damn it all. I wrote here recently about the huge price increase of Thiola (tiopronin) by a small company called Retrophin.

Now, as I wrote about here last year, another small company called Catalyst Pharmaceuticals is preparing to jack up the price of Firdapse (3,4-diaminopyridine) for the rare disorder Lambert-Eaton Myasthenic Syndrome (LEMS).

This disease is so rare, and the drug is so easily available, that it is currently being given away for free. But Catalyst is going to make sure that it won't stay free for long. Not at all:

There was never any doubt about Firdapse's ability to treat LEMS symptoms effectively because it's the same active drug as 3,4-Dap. With that perspective, Catalyst's triumphant press release Monday is all the more galling. The company took no risks with Firdapse. The company did no development work, made no effort to improve the drug's efficacy, safety or convenience for patients. The only thing Catalyst did was write a check to Biomarin and take over supervision of a Firdapse clinical trial already well underway.

For the zero work done by Catalyst, LEMS patients and their insurance companies will be paying as much as $80,000 for the exact same drug they use now for a fraction of the cost, if not gratis.

To just add a rancid cherry on top, that piece by Adam Feuerstein also details the way the company is apparently intimidating LEMS patients by telling them that they'll need to be deposed in a shareholder lawsuit. Now this is what regulatory failure looks like. I can think of no possible reason, no public good, that comes from taking a drug that was easily available and working exactly as it should and letting someone suddenly charge $80,000 a year for it. This is not a reward for innovation or risk-taking - this is exploitation of a regulatory loophole, a blatant shakedown, or so it seems to me.

Why does the FDA let this happen? It brings the agency into disrepute, and the whole drug industry as well, and for no benefit at all. Well, unless you're the sort of person who executes one of these business plans, in which case you should get out of my sight. Too many people already think that all drug companies do is grab someone else's inexpensive compound and then raise the price as high as they possibly can. Watching people like Catalyst and Retrophin actually live the stereotype is infuriating.

Update: The previous licensee for this drug in Europe, Biomarin, was harshly criticized for just this sort of business plan. Here is an open letter from 2010 from a group of British physicians to Prime Minister David Cameron, and its opening paragraph succinctly describes the problem here:

". . .The original purpose of this (orphan drug) legislation, passed in 1999, was to encourage drug companies to conduct research into rare diseases and develop novel treatments. However, as the rules are currently enacted, many drug companies merely address their efforts to licensing drugs that are already available rather than developing new treatments. Once a company has obtained a licence, the legislation then gives the company sole rights to supply the drug. This in turn allows the company to set an exorbitant price for this supply and effectively to bar previous suppliers of the unlicensed preparation from further production and distribution.

Similar regulatory loopholes have been used to raise the price of colchicine and hydroxyprogesterone, among others, and we can expect this to be done over and over, with every single drug that it can be done to, because the supply of people who think that this is a good idea is apparently endless. And the public backlash and the regulatory (and legislative) scrutiny that this brings down will then be distributed not just to the rent-seeking generic companies involved, but to every drug company of any type, because whatever hits the fan is never evenly spread. Do we really want this?

Postscript: In an interesting sequel to the Retrophin story, the company's board this week replaced CEO Martin Shkreli, whose sudden appearance on Reddit in response to this issue probably didn't help his position.

Comments (48) + TrackBacks (0) | Category: Drug Prices | Regulatory Affairs | The Dark Side

We Can't Calculate Our Way Out of This One

Email This Entry

Posted by Derek

Clinical trial failure rates are killing us in this industry. I don't think there's much disagreement on that - between the drugs that just didn't work (wrong target, wrong idea) and the ones that turn out to have unexpected safety problems, we incinerate a lot of money. An earlier, cheaper read on either of those would transform drug research, and people are willing to try all sorts of things to those ends.

One theory on drug safety is that there are particular molecular properties that are more likely to lead to trouble. There have been several correlations proposed between high logP (greasiness) and tox liabilities, between multiple aromatic rings and tox, and so on. One rule proposed in 2008 by a group at Pfizer draws the lines at a clogP of 3 and a total polar surface area of 75 square angstroms: compounds with clogP above 3 and TPSA below 75 were reported to be about 2.5 times more likely to run into toxicity trouble (a quick sketch of that classification is at the end of this post). But here's a paper in MedChemComm that asks if any of this has any validity:

What is the likelihood of real success in avoiding attrition due to toxicity/safety from using such simple metrics? As mentioned in the beginning, toxicity can arise from a wide variety of reasons and through a plethora of complex mechanisms similar to some of the DMPK endpoints that we are still struggling to avoid. In addition to the issue of understanding and predicting actual toxicity, there are other hurdles to overcome when doing this type of historical analysis that are seldom discussed.

The first of these is making sure that you're looking at the right set of failed projects - that is, ones that really did fail because of unexpected compound-associated tox, and not some other reason (such as unexpected mechanism-based toxicity, which is another issue). Or perhaps a compound could have been good enough to make it on its own under other circumstances, but the competitive situation made it untenable (something else came up with a cleaner profile at about the same time). Then there's the problem of different safety cutoffs for different therapeutic areas - acceptable tox for a pancreatic cancer drug will not cut it for type II diabetes, for example.

The authors did a thorough study of 130 AstraZeneca development compounds, with enough data to work out all these complications. (This is the sort of thing that can only be done from inside a company's research effort - you're never going to have enough information, working from outside). What they found, right off, was that for this set of compounds the Pfizer rule was completely inverted. The compounds on the too-greasy side actually had shown fewer problems (!) The authors looked at the data sets from several different angles, and conclude that the most likely explanation is that the rule is just not universally valid, and depends on the dataset you start with.

The same thing happens when you look at the fraction of sp3 carbons, which is a characteristic (the "Escape From Flatland" paper) that's also been proposed to correlate with tox liabilities. The AZ set shows no such correlation at all. Their best hypothesis is that this is a likely correlation with pharmacokinetics that has gotten mixed in with a spurious correlation with toxicity (and indeed, the first paper on this trend was only talking about PK). And finally, they go back to an earlier properties-based model published by other workers at AstraZeneca, and find that it, too, doesn't seem to hold up on the larger, more curated data set. Their take-home message: ". . .it is unlikely that a model of simple physico-chemical descriptors would be predictive in a practical setting."

Even more worrisome is what happens when you take a look at the last few years of approved drugs and apply such filters to them (emphasis added):

To investigate the potential impact of following simple metric guidelines, a set of recently approved drugs was classified using the 3/75 rule (Table 3). The set included all small molecule drugs approved during 2009–2012 as listed on the ChEMBL website. No significant biases in the distribution of these compounds can be seen from the data presented in Table 3. This pattern was unaffected if we considered only oral drugs (45) or all of the drugs (63). The highest number of drugs ends up in the high ClogP/high TPSA class and the class with the lowest number of drugs is the low ClogP/low TPSA. One could draw the conclusion that using these simplistic approaches as rules will discard the development of many interesting and relevant drugs.

One could indeed. I hadn't seen this paper myself until the other day - a colleague down the hall brought it to my attention - and I think it deserves wider attention. A lot of drug discovery organizations, particularly the larger ones, use (or are tempted to use) such criteria to rank compounds and candidates, and many of us are personally carrying such things around in our heads. But if these rules aren't valid - and this work certainly makes it look as if they aren't - then we should stop pretending as if they are. That throws us back into a world where we have trouble distinguishing troublesome compounds from the good ones, but that, it seems, is the world we've been living in all along. We'd be better off if we just admitted it.
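And here's the quick sketch of the 3/75 classification promised above. The classification itself is trivial to code up - the hard part, as this paper shows, is deciding whether to believe what it tells you. The property values below are invented; real ones would come from your own clogP and TPSA calculators:

def rule_3_75_flag(clogp, tpsa):
    # The Pfizer "3/75" classification: compounds with clogP > 3 and
    # TPSA < 75 square angstroms were reported as the higher-risk bin.
    # Whether that holds for any given dataset is exactly what's in dispute.
    return clogp > 3 and tpsa < 75

# Invented example values (clogP, TPSA in square angstroms).
compounds = {
    "compound_A": (4.2, 60),   # greasy, low TPSA  -> flagged
    "compound_B": (1.5, 110),  # polar, high TPSA  -> not flagged
    "compound_C": (3.8, 90),   # greasy but polar  -> not flagged
}
for name, (clogp, tpsa) in compounds.items():
    print(name, "flagged by 3/75" if rule_3_75_flag(clogp, tpsa) else "not flagged")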

Comments (25) + TrackBacks (0) | Category: Drug Assays | Drug Development | In Silico | Toxicology

October 1, 2014

No More Prearranged Editors at PNAS

Email This Entry

Posted by Derek

While we're on the topic of the literature, I see that PNAS has made some changes to their system:

Although the largest number of submissions to PNAS are through the Direct Submission route, there continues to linger a general perception that to publish a paper in PNAS, an author requires sponsorship of an NAS member. In July 2010, the member Communicated route was eliminated, but to ensure and promote submission of exceptional papers that either were interdisciplinary and hence needed special attention, or were unique and perhaps ahead of their time, authors were able to use the Prearranged Editor (PE) process for Direct Submissions. The PE process was intended to be used on rare occasions but, since we formalized the PE process, more than 11,000 papers have been submitted by authors with a PE designation. Although we are certain that some of these papers truly needed special attention, the vast majority likely did not, and therefore we are discontinuing the PE process as of October 1, 2014. We will continue to honor the current PE submissions.

They're setting up a new submission process, which (from what I can see) will make the journal very much like the rest of the field. Are there any odd routes to a PNAS publication left? As for the whole literature landscape, I'm sticking with my characterizations and there are plenty more in the comments, for now.

Comments (6) + TrackBacks (0) | Category: The Scientific Literature

No, They Really Aren't Reproducible

Email This Entry

Posted by Derek

Here's an interview with Nobel winner Randy Schekman, outspoken (as usual) on the subject of the scientific literature. This part caught my attention:

We have a problem. Some people claim that important papers cannot be replicated. I think this is an argument that has been made by the drug companies. They claim that they take observations in the literature and they can't reproduce them, but what I wonder is whether they're really reproducing the experiments or simply trying to develop a drug in an animal model and not exactly repeating the experiments in the publication. But I think it is unknown what fraction of the literature is wrong, so we're conducting an experiment. We've been approached by an organization called the Reproducibility Project, where a private foundation has agreed to provide funds for experiments in the fifty most highly cited papers in cancer biology to be reproduced, and the work will be contracted out to laboratories for replication. And we've agreed to handle this and eventually to publish the replication studies, so we'll see, you know, at least with these fifty papers. How many of them really have reproducibility. The reproducibility studies will be published in eLife. We're just getting going in that, so it may be a couple of years, but that's what we'd like to do.

OK, then. As one of those drug-company people, I can tell you that we actually do try to run the exact experiment in these papers. We may run that second, after we've tried to reproduce things in our own assays and failed, but we never write things off unless we've tried to do exactly what the paper said to do. And many times, it still doesn't work. Or it gives a readout in that system, but we have strong reason to believe that the assay in the original work was flawed or misinterpreted. We are indeed trying to develop drugs, but (and I speak from personal experience here, and more than once), when we call something irreproducible, that's because we can't actually reproduce it.

And the problem with trying to reproduce the 50 most-cited papers (see Update below!) is that most of those are probably going to work pretty well. That's why they're so highly cited. The stuff that doesn't work is the splashy new press-released papers in Nature, Cell, Science, or PNAS, the ones that say that Protein X turns out to be a key player in Disease Y, or that Screening Compound Z turns out to be a great ligand for it. Some of those are right, but too many of them are wrong. They haven't been in the literature long enough to pick up a mound of citations, but when we see these things, we get right to work on them to see if we believe them.

And there really are at least two layers of trouble, as mentioned. Reproducibility is one: can you get the exact thing to happen that the paper reports? That's the first step, and it fails more than it should. But if things do work as advertised, that still doesn't mean that the paper's conclusions are correct. People miss things, overinterpret, fail to run a key control, and so on. If someone reports a polyphenolic rhodanine as a wonderful ligand for The Hot Protein of the Moment, odds are that you can indeed reproduce the results in their assay. But that doesn't mean that the paper is much good to anyone at all, because said rhodanine is overwhelmingly unlikely to be of any use. Run it through an assay panel, and it lights up half the board - try interpreting cell data from that, and good luck. So you have Non-Reproducible, and Reproducible, For All the Good That Does.

But if Schekman and the Reproducibility Project are looking for tires to kick, I recommend picking the fifty papers from the last two or three years that caused the most excitement, press releases, and press coverage. New cancer target! Stem cell breakthrough! Lead structure for previously undruggable pathway! Try that stuff out, and see how much of it stands up.

Update: this interview turns out not to be quite correct about the papers that will be reproduced. More details here, and thanks to Tim in the comments for this. It's actually the "50 most impactful" papers from 2010 through 2012, which sounds a lot more like what I have in mind above. Here's the list. This will be quite interesting. . .

Comments (29) + TrackBacks (0) | Category: The Scientific Literature

September 30, 2014

A New Reductive Amination

Email This Entry

Posted by Derek

A colleague brought this new JACS paper to my attention the other day. It's a complementary method to the classic reductive amination reaction. Instead of an aldehyde and amine (giving you a new alkylated amine), in this case, you use a carboxylic acid and an amine to give you the same product, knocking things down another oxidation state along the way.

This reaction, from Matthias Beller and co-workers at Rostock, uses Karstedt's catalyst (a platinum species) with phenylsilane as reducing agent. Double bonds don't get reduced, Cbz and Boc groups survive, as do aliphatic esters. Most of the examples in the paper are on substituted anilines, but there are several aliphatic amines as well. A wide variety of carboxylic acids seem to work, including fluorinated ones. I like it - as a medicinal chemist, I'm always looking for another way to make amines, and there sure are a lot of carboxylic acids out there in the world.

Comments (17) + TrackBacks (0) | Category: Chemical News

Real-World Ebola

Email This Entry

Posted by Derek

As opposed to people who are telling Liberians to fear foreign medical help for the Ebola epidemic, here's a CNN story about a local doctor who's trying everything he has available. He has desperately ill patients, but may have seen some positive results with lamivudine (3-TC), a nucleoside analog that's been used for HIV and hepatitis B infections. It's not impossible that it might be effective against a filovirus like Ebola - an adenosine derivative, BCX4430, has just been reported with some promising activity against filovirus infections, and nucleosides are the first general class you think of as antivirals. (I can find no references to other reports of the compound being used against Ebola or related viruses).

So I'd like to recognize Dr. Gobee Logan, of Tubmanburg, Liberia. He and others like him are risking their lives to treat their patients under these conditions, and are having to try whatever they can in the process. If he's come across an existing drug that could be useful, even better. Given the mortality rates that have been seen in this outbreak, it's worth a try.

Comments (32) + TrackBacks (0) | Category: Infectious Diseases

September 29, 2014

The Case of Northwest Biotherapeutics

Email This Entry

Posted by Derek

There have been a lot of strong words exchanged about Northwest Biotherapeutics (NWBO), a small Maryland-based company developing a brain cancer vaccine. Over at Fierce Biotech, they're wondering why this program was picked by the UK authorities as their first official "Promising Innovative Medicine", given the scarcity of data (and the dismal track record of dendritic vaccines in the field).

Adam Feuerstein has said a bunch of similar things, vigorously, at TheStreet.com over the last few months as well. He's been especially skeptical of the company's own vigorous PR efforts, and in general tends to be unenthusiastic about small go-it-alone oncology programs. The Feuerstein-Ratain rule, that small-cap cancer trials fail, has been hard to refute.

Well, just the other day Washington Post columnist Steven Pearlstein waded into this story with a piece about how evil short-sellers are hurting promising little biotech companies. That's pretty much the tone of the whole thing, and he uses Feuerstein and NWBO as his prime example, with not-quite-stated allegations of collusion with short-sellers.

My belief is that this is a load of crap, from someone who doesn't understand very much about how the stock market works. Small companies that have been unable to interest anyone else in their technologies have a difficult time of it, to be sure. But we don't need to go to conspiracy theories to explain this. There are indeed short-selling investors who are trying to drive stocks down, but they are absolutely outnumbered by the people who are trying to drive stocks up. That's what a stock market is: differences of opinion, held strongly enough for money to be put down on them.

If you look at Feuerstein's most recent column on NWBO, you find that only one other company has even applied for the "Promising Innovative Medicine" designation (and that application is in process). So this is not some incredible milestone. And you also find a lot of useful information on the company's debt structure, the exact sort of thing that an investor in the company should be interested in. Will you get these details by reading press releases from Northwest Biotherapeutics? You will not. You will get them from people who are willing to scrutinize a company, its operations, and its pipeline in detail.

Does Steven Pearlstein think that these details about NWBO's debt deal are false? He should say so. But he also talks about short-sellers crippling Dendreon, which ignores completely the fact that what's crippled Dendreon is that their vaccine doesn't work very well. Wonderful drugs don't get buried by short-sellers. Drugs get buried by data.

Update: TheStreet.com is now seeking a retraction from Pearlstein. One of the key sentences is "Mr. Pearlstein -- who said he knew nothing about biotech or medicine . . ." Pretty much had that part figured out already. The letter to the Post goes on to claim a number of other serious deficiencies with Pearlstein's reporting. It's going to be interesting to see where this leads. . .

Comments (42) + TrackBacks (0) | Category: Business and Markets | Cancer | Clinical Trials

2014 Chemistry Nobel Predictions

Email This Entry

Posted by Derek

Well, we're getting close to the Nobel season, so it's time for the yearly "Who's going to win?" post. According to Thomson Reuters, some favorites are Tang/Van Slyke for organic light-emitting diodes, Moad/Rizzardo/Thang for RAFT polymerization, and Kresge/Ryoo/Stucky for mesoporous materials. You can see a real materials-science drift to those picks, which would indicate that the Thomson-Reuters folks think that we're not going to get another that's-not-chemistry-that's-biology award this year (nor one in analytical chemistry).

But if they're wrong about that, there are several things that shade over into molecular biology that are queued up. Some sort of prize for nuclear receptors would be plausible, and the CRISPR gene editing technology is surely in line for one. Another surely-that-will-win technique is optogenetics, the use of light-activated ion channels to switch cells (neurons, especially) on and off, which is being used all over biology. They could always give it to Venter (et al.) for gene sequencing, or to Bruce Ames for the Ames test. As usual, these could end up in chemistry, or over in the physiology/medicine prize. In the zone where analytical chemistry blends into physics, there are single-molecule spectroscopy and SPR. I don't see a flat-out organic chemistry prize in the works, but Sharpless is still plausible as part of a click-chemistry/chemical biology sort of award.

Other predictions can be found at Wavefunction's blog (he has a different top pick) and Everyday Scientist. Add your own guesses to the comments section, and we'll see how wrong we all can be!

Comments (69) + TrackBacks (0) | Category: Chemical News

September 26, 2014

The Deadly Stupidities Around Ebola

Email This Entry

Posted by Derek

It's not all that often that you can say "Now this is a person who's going to get people killed". But I'm willing to say that about Cyril Broderick of Delaware State. He's Liberian-born, and has written an article for a newspaper in Monrovia telling Liberians that the Ebola virus is a manufactured bioweapon from the pharmaceutical companies and the US Department of Defense. And he goes on to say that the WHO, Doctors Without Borders, and the CDC are all in on the plot. Isn't that nice?

This in a region where suspicions run so high that doctors, officials, and aid workers are being killed by angry mobs already. Now Prof. Broderick has given his Liberian countrymen more reason to fear some of the people who are best equipped, of anyone on this suffering world, to actually help them. If GSK's Ebola vaccine actually proves effective, if a rapacious evil pharma company actually comes up with a way to stop the disease and turns it over to people like the WHO and Doctors Without Borders to go into West Africa and administer it, stuff like what Professor Broderick is spewing will make it that much harder to accomplish anything with it. People will hide rather than get vaccinated, and will attack the people coming in to save their lives. Broderick and Matthias Rath, who's urged HIV patients to throw away their antiretroviral drugs, are in the same category, and I am ashamed to be on the same planet with them.

And all because of a bunch of stuck-together conspiracy theories, chew-toys for halfwits. It would be easier to laugh at if it weren't getting people killed, patients and medical workers alike. Schiller was right: against stupidity, the Gods themselves fight in vain.

Comments (47) + TrackBacks (0) | Category: Snake Oil

PAINS Go Mainstream

Email This Entry

Posted by Derek

Well, I'm back in the Eastern Time Zone after flying in from Basel (and Amsterdam) yesterday. And the first thing I wanted to mention was this article from Jonathan Baell and Michael Walters in Nature, on the PAINS compounds. It's good to see the journal cover this issue (and I was impressed that they got New Yorker cartoonist Roz Chast to illustrate it).

PAINS (pan-assay interference compounds) are, of course, nasty frequent-hitting compounds that should be approached with great caution in any sort of screen for activity. This topic has come up many times on the blog (for someone writing about chemistry and drug discovery, there's no way it couldn't have), most recently just a few weeks ago. There are a lot of these things out in the literature (and the catalogs), and they just keep on coming. Now a wider audience gets to hear about the problem:

Academic researchers, drawn into drug discovery without appropriate guidance, are doing muddled science. When biologists identify a protein that contributes to disease, they hunt for chemical compounds that bind to the protein and affect its activity. A typical assay screens many thousands of chemicals. ‘Hits’ become tools for studying the disease, as well as starting points in the hunt for treatments.

But many hits are artefacts — their activity does not depend on a specific, drug-like interaction between molecule and protein. A true drug inhibits or activates a protein by fitting into a binding site on the protein. Artefacts have subversive reactivity that masquerades as drug-like binding and yields false signals across a variety of assays.

That's the problem, all right. It's not like ugly-looking compounds can never become drugs, and it's not like they can't be starting points for research. But the odds are against them, and you have to realize that, and you also have to realize why this "hit" you've just uncovered may well be spurious (at worst) or need a lot of extra work (at best). Far, far too many papers from less experienced research teams seem to be oblivious to these concerns. Compound hits? Compound good!

Appropriately, this piece calls out the rhodanines as perfect examples of the problem:

Rhodanines exemplify the extent of the problem. A literature search reveals 2,132 rhodanines reported as having biological activity in 410 papers, from some 290 organizations of which only 24 are commercial companies. The academic publications generally paint rhodanines as promising for therapeutic development. In a rare example of good practice, one of these publications (by the drug company Bristol-Myers Squibb) warns researchers that these types of compound undergo light-induced reactions that irreversibly modify proteins. It is hard to imagine how such a mechanism could be optimized to produce a drug or tool. Yet this paper is almost never cited by publications that assume that rhodanines are behaving in a drug-like manner.

Very occasionally, a PAINS compound does interact with a protein in a specific drug-like way. If it does, its structure could be optimized through medicinal chemistry. However, this path is fraught — it can be difficult to distinguish when activity is caused by a drug-like mechanism or something more insidious. Rhodanines also occur in some 280 patents, a sign that they have been selected for further drug development. However, to our knowledge, no rhodanine plucked out of a screening campaign is in the clinic or even moving towards clinical development. We regard the effort to obtain and protect these patents (not to mention the work behind them) as a waste of money.

Yeah, I wouldn't spend much on trying to stake a claim to these things, either. If you haven't done much screening, you may not appreciate just how many false positives are out there (and for difficult targets, how few real positives there may be). I see people in the literature screening little libraries of a few thousand compounds from a catalog and reporting hit after hit, even in very tricky systems, while in industry we're used to running hundreds of thousands of compounds past some of these things and coming up with squat. Well, after checking the "hits" for purity, aggregation behavior, reactivity, and profiles from past screening campaigns, that is.

Here's the sad truth: If you're doing a small-molecule screen to affect transcription factors, protein-protein targets, or anything in general that doesn't have an evolutionary optimized small-molecule binding site, you'd better assume that the vast majority of any hits you get are false positives. There's almost no way that they can be anything else. The true hit rate for some of these things against any sort of typical compound collection is damn near zero, which means that the ways your compounds can be wrong far outnumber the ways that they can be right.

Every single hit, for any assay, should be regarded with appropriate suspicion. Purity check first, LC/MS and NMR. Is it what it says on the label? You might be surprised how often it isn't (or isn't any more, even if it started out OK). If you have solid material and DMSO stock, check both of them, because things diverge on storage. It's a very good idea to take your interesting hits, run them through a plug of silica gel, and test them again. That's especially true if they have any color to them (but keep in mind, some assay-killing contaminants are completely colorless). The gold standard is resynthesis: if you can make the compound again and purify it, and it still works, you at least know you can trust it that far. If you can't, well, how exactly is this compound going to do anyone any good?

Note that we haven't even gotten to the PAINS yet. There are a lot of clean, accurately labeled compounds that should be chucked into the waste can, too, which is where the Baell PAINS list comes in. You're going to want to check for aggregation: run your assay with some detergent in it, or do some dynamic light scattering or any of several other techniques. A lot of false-positive compounds are aggregators, and you can't completely predict which ones they might be (it varies according to assay conditions).

You're also going to want to run your hits through some other assays. How promiscuous are they? If you have access to data from multiple screening campaigns with the same compound collection, good for you. If you don't, you should strongly consider sending your hot compound(s) out for a commercial screening panel. Don't just pick the similar targets to screen - you want those, of course, but you want all kinds of other stuff. If a compound hits against widely disparate protein classes, it's a PAIN, and is set to cause trouble. Don't assume that they're clean - don't assume that any compound is clean, because it almost certainly isn't. That goes for marketed drugs, too - the question is, does it have selectivity that you can live with, or not?

Those are the big tests, and believe me, they'll clear out your initial list of screening hits for you. If your target is a tough one to start with, they may well clear out everything. Better that, though, than working on (and publishing) crap.
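One practical footnote: the published PAINS substructure alerts are easy enough to run as a first-pass flag - RDKit ships a filter catalog for them, if I have its API right, so take the snippet below as a sketch rather than gospel. And remember that passing the filters doesn't excuse a compound from any of the checks above:

# Sketch of a first-pass PAINS substructure check with RDKit's filter catalog.
# This only flags known troublesome substructures - it's no substitute for
# purity checks, aggregation tests, or selectivity panels.
from rdkit import Chem
from rdkit.Chem.FilterCatalog import FilterCatalog, FilterCatalogParams

params = FilterCatalogParams()
params.AddCatalog(FilterCatalogParams.FilterCatalogs.PAINS)
catalog = FilterCatalog(params)

# A benzylidene rhodanine, drawn by hand for illustration.
mol = Chem.MolFromSmiles("O=C1NC(=S)SC1=Cc1ccccc1")
match = catalog.GetFirstMatch(mol)
if match is not None:
    print("PAINS alert:", match.GetDescription())
else:
    print("No PAINS substructure flagged (which proves nothing by itself).")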

Comments (15) + TrackBacks (0) | Category: Academia (vs. Industry) | Drug Assays

September 25, 2014

En Route

Email This Entry

Posted by Derek

Traveling today, so no time for a blog entry. More science and stuff tomorrow, though!

Comments (4) + TrackBacks (0) | Category: Blog Housekeeping

September 24, 2014

Luc Montagnier Makes His Case in Paris

Email This Entry

Posted by Derek

Well, if you're in Paris next month and want to see some scientific (or perhaps not so scientific?) fireworks at a seminar, here's the event to attend. UNESCO is holding a symposium on Luc Montagnier's "water memory" studies. I've mentioned these (and some of his other. . .unusual. . .claims) here before, and well. . .it's hard for me to say this, but they are indistinguishable from the work of a crank. Or someone with an unfortunate mental condition. I'm sure that Montagnier gets these kinds of responses all the time, and he obviously is strong enough to keep going with what he believes to be real results, so I have to give him credit for that. But extraordinary claims call for extraordinary evidence, you know, and I haven't seen much of the latter.

The mathematician Cédric Villani, who received the Fields Medal in 2010, will propose a synthesis of the various presentations. He will include them in the broader context of Professor Jacques Benveniste’s work (1935-2004) on the "memory of water", which was initiated thirty years ago.

Professor Montagnier’s team is working on electromagnetic waves emitted in the area of very low frequencies and thus of low energy. Different reproducible experiments will be presented at the conference. These experiments show that these waves may play an important role in the pathogenicity of micro-organisms - bacteria and viruses – and also in physiological processes such as stem cell differentiation shown by Professor Carlo Ventura.

The experimental facts will be presented by the two biologists. It appears that water is an important mediator in the transmission of molecular information, such as that carried by DNA. To achieve such transmission, water generates organized structures, which also emit electromagnetic signals. Marc Henry and Giuseppe Vitiello, relying on concepts developed by Italian physicists Giuliano Preparata and Emilio Del Giudice, will explain how quantum physics can elucidate these mysterious phenomena. They will reveal new fields of research that are areas of consistency activating water molecules. Interdisciplinarity (physics/biology) is the conference’s major message.

The promoters of this conference are aware of the critical reactions aroused by this work in parts of the scientific community, so they wish to communicate their results with the utmost rigor. . .

Utmost rigor might not be enough. If anyone makes it to this, please send a report!

Comments (21) + TrackBacks (0) | Category: Snake Oil