About this Author
Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship on his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases.
To contact Derek email him directly: email@example.com
In the Pipeline:
Don't miss Derek Lowe's excellent commentary on drug discovery and the pharma industry in general at In the Pipeline
December 4, 2013
Here's an effort to find tenants for the former AstraZeneca campus at Charnwood. A few buildings are being demolished to make room, and the owners are hoping for biomedical researchers to move in. I hope that works; it seems like a good research site. I'm not sure that trying to sell it as ". . .perfectly located between Leicester, Nottingham and Derby" is as good a pitch as can be made, but there are worse ones.
+ TrackBacks (0) | Category: Drug Industry History
Seth Mnookin's The Panic Virus is an excellent overview of the vaccine/autism arguments that raged for many years (and rage still in the heads of the ignorant - sorry, it's gotten to the point where there's no reason to spare anyone's feelings about this issue). Now in this post at PLOS Blogs, he's alerting people to another round of the same stuff, this time about the HPV vaccine:
Over a period of about a month, (Katie Couric's) producer and I spoke for a period of several hours before she told me that the show was no longer interested in hearing from me on air. Still, I came away from the interaction somewhat heartened: The producer seemed to have a true grasp of the dangers of declining vaccination rates and she stressed repeatedly that her co-workers, including Couric herself, did not view this as an “on the one hand, on the other hand” issue but one in which facts and evidence clearly lined up on one side — the side that overwhelmingly supports the importance and efficacy of vaccines.
Apparently, that was all a load of crap.
Read on for more. One piece of anecdotal data trumps hundreds of thousands of patients worth of actual data, you know. Especially if it's sad. Especially if it gets ratings.
+ TrackBacks (0) | Category: Autism | Infectious Diseases | Snake Oil
Here's some work that gets right to the heart of modern drug discovery: how are we supposed to deal with the variety of patients we're trying to treat? And the variety in the diseases themselves? And how does that correlate with our models of disease?
This new paper, a collaboration between eight institutions in the US and Europe, is itself a look at two other recent large efforts. One of these, the Cancer Genome Project, tested 138 anticancer drugs against 727 cell lines. Its authors said at the time (last year) that "By linking drug activity to the functional complexity of cancer genomes, systematic pharmacogenomic profiling in cancer cell lines provides a powerful biomarker discovery platform to guide rational cancer therapeutic strategies". The other study, the Cancer Cell Line Encyclopedia, tested 24 drugs against 1,036 cell lines. That one appeared at about the same time, and its authors said ". . .our results indicate that large, annotated cell-line collections may help to enable preclinical stratification schemata for anticancer agents. The generation of genetic predictions of drug response in the preclinical setting and their incorporation into cancer clinical trial design could speed the emergence of ‘personalized’ therapeutic regimens."
Well, will they? As the latest paper shows, the two earlier efforts overlap to the extent of 15 drugs, 471 cell lines, 64 genes and the expression of 12,153 genes. How well do they match up? Unfortunately, the answer is "Not too well at all". The discrepancies really come out in the drug sensitivity data. The authors tried controlling for all the variables they could think of - cell line origins, dosing protocols, assay readout technologies, methods of estimating IC50s (and/or AUCs), specific mechanistic pathways, and so on. Nothing really helped. The two studies were internally consistent, but their cross-correlation was relentlessly poor.
It gets worse. The authors tried the same sort of analysis on several drugs and cell lines themselves, and couldn't match their own data to either of the published studies. Their take on the situation:
Our analysis of these three large-scale pharmacogenomic studies points to a fundamental problem in assessment of pharmacological drug response. Although gene expression analysis has long been seen as a source of ‘noisy’ data, extensive work has led to standardized approaches to data collection and analysis and the development of robust platforms for measuring expression levels. This standardization has led to substantially higher quality, more reproducible expression data sets, and this is evident in the CCLE and CGP data where we found excellent correlation between expression profiles in cell lines profiled in both studies.
The poor correlation between drug response phenotypes is troubling and may represent a lack of standardization in experimental assays and data analysis methods. However, there may be other factors driving the discrepancy. As reported by the CGP, there was only a fair correlation (rs < 0.6) between camptothecin IC50 measurements generated at two sites using matched cell line collections and identical experimental protocols. Although this might lead to speculation that the cell lines could be the source of the observed phenotypic differences, this is highly unlikely as the gene expression profiles are well correlated between studies.
Although our analysis has been limited to common cell lines and drugs between studies, it is not unreasonable to assume that the measured pharmacogenomic response for other drugs and cell lines assayed are also questionable. Ultimately, the poor correlation in these published studies presents an obstacle to using the associated resources to build or validate predictive models of drug response. Because there is no clear concordance, predictive models of response developed using data from one study are almost guaranteed to fail when validated on data from another study, and there is no way with available data to determine which study is more accurate. This suggests that users of both data sets should be cautious in their interpretation of results derived from their analyses.
"Cautious" is one way to put it. These are the sorts of testing platforms that drug companies are using to sort out their early-stage compounds and projects, and very large amounts of time and money are riding on those decisions. What if they're gibberish? A number of warning sirens have gone off in the whole biomarker field over the last few years, and this one should be so loud that it can't be ignored. We have a lot of issues to sort out in our cell assays, and I'd advise anyone who thinks that their own data are totally solid to devote some serious thought to the possibility that they're wrong.
Here's a Nature News summary of the paper, if you don't have access. It notes that the authors of the two original studies don't necessarily agree that they conflict! I wonder if that's as much a psychological response as a statistical one. . .
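For the curious, the sort of cross-study check at the heart of the paper can be roughed out in a few lines. This is a hypothetical sketch with invented numbers, not data from either study; it just shows how a rank correlation between two labs' IC50 measurements for the same drug and cell lines would be computed.

```python
# Hypothetical sketch: rank-correlating drug-sensitivity values for the same
# drug and cell lines as measured in two studies. All numbers are invented.

def ranks(values):
    """Return 1-based ranks of the values (no tie handling, for simplicity)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    """Spearman rank correlation via the classic sum-of-squared-rank-differences formula."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# log10(IC50) for ten shared cell lines, one drug, in each study (made up)
study_a = [-6.1, -5.8, -7.2, -5.0, -6.5, -4.9, -6.0, -5.5, -7.0, -5.2]
study_b = [-5.9, -6.4, -5.5, -6.2, -4.8, -6.6, -5.1, -5.7, -5.3, -6.8]

rs = spearman(study_a, study_b)
print(f"cross-study Spearman rs = {rs:.2f}")  # low rs = poor agreement
```

An rs near 1 would mean the two studies rank the cell lines' sensitivities the same way; recall that the CGP's own site-to-site comparison for camptothecin, with identical protocols, came in below 0.6.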
+ TrackBacks (0) | Category: Biological News | Cancer | Chemical Biology | Drug Assays
December 3, 2013
Interesting science-gift ideas can be found in the "home experiments" area. There's been a small boom in this sort of book in recent years, which I think is a good thing all the way around. I believe that there's a good audience out there of people who are interested in science, but have no particular training in it, either because they're young enough not to have encountered much (or much that was any good), or because they missed out on it while they were in school themselves.
Last year I mentioned Robert Bruce (and Barbara) Thompson's Illustrated Guide to Home Chemistry Experiments along with its sequels, the Illustrated Guide to Home Biology Experiments and the Illustrated Guide to Home Forensic Science Experiments. Similar books are Hands-On Chemistry Activities and its companion Hands-On Physics Activities.
Related to these are two from Theodore Gray: Theo Gray's Mad Science, and its new sequel, Mad Science 2. Both of these are subtitled "Experiments that you can do at home - but probably shouldn't", and I'd say that's pretty accurate. Many of these use equipment and materials that most people probably won't have sitting around, and some of the experiments are on the hazardous side (which, I should mention, is something that's fully noted in the book). But they're well-illustrated from Gray's own demonstration runs, so you can at least see what they look like, and learn about the concepts behind them.
And there's copious chemistry available in a series of books by Bassam Shakhashiri, whose web site is here. These are aimed at people teaching chemistry who would like clear, tested demonstrations for their students, but if you know someone who's seriously into home science experimentation, they'll find a lot here. The most recent, Chemical Demonstrations, Volume 5, concentrates on colors and light. The previous ones are also available, and cover a range of topics in each book: Volume 4, Volume 3, Volume 2, and Volume 1.
+ TrackBacks (0) | Category: Book Recommendations | Science Gifts
The sleazy scientific publishing racket continues to plumb new depths in its well-provisioned submarine. Now comes word of "Stringer Open" - nope, not Springer Open, that one's a real publisher of real journals. This outfit is Stringer, which is a bit like finding a list of journals published by the American Comical Society. The ScholarlyOA blog noticed that the same person appears on multiple editorial boards across their various journals. When contacted, she turned out to be a secretary who's never heard of "Stringer". Class all the way. The journals themselves will be populated by the work of dupes and/or con artists - maybe some of those Chinese papers-for-rent can be stuffed in there to make a real lasagna of larceny out of the whole effort.
+ TrackBacks (0) | Category: The Dark Side | The Scientific Literature
The New Yorker has an article about Merck's discovery and development of suvorexant, their orexin antagonist for insomnia. It also goes into the (not completely reassuring) history of zolpidem (known under the brand name of Ambien), which is the main (and generic) competitor for any new sleep drug.
The piece is pretty accurate about drug research, I have to say:
John Renger, the Merck neuroscientist, has a homemade, mocked-up advertisement for suvorexant pinned to the wall outside his ground-floor office, on a Merck campus in West Point, Pennsylvania. A woman in a darkened room looks unhappily at an alarm clock. It’s 4 a.m. The ad reads, “Restoring Balance.”
The shelves of Renger’s office are filled with small glass trophies. At Merck, these are handed out when chemicals in drug development hit various points on the path to market: they’re celebrations in the face of likely failure. Renger showed me one. Engraved “MK-4305 PCC 2006,” it commemorated the day, seven years ago, when a promising compound was honored with an MK code; it had been cleared for testing on humans. Two years later, MK-4305 became suvorexant. If suvorexant reaches pharmacies, it will have been renamed again—perhaps with three soothing syllables (Valium, Halcion, Ambien).
“We fail so often, even the milestones count for us,” Renger said, laughing. “Think of the number of people who work in the industry. How many get to develop a drug that goes all the way? Probably fewer than ten per cent.”
I well recall when my last company closed up shop - people in one wing were taking those things and lining them up out on a window shelf in the hallway, trying to see how far they could make them reach. Admittedly, they bulked out the lineup with Employee Recognition Awards and Extra Teamwork awards, but there were plenty of oddly shaped clear resin thingies out there, too.
The article also has a good short history of orexin drug development, and it happens just the way I remember it - first, a potential obesity therapy, then sleep disorders (after it was discovered that a strain of narcoleptic dogs lacked functional orexin receptors).
Mignot recently recalled a videoconference that he had with Merck scientists in 1999, a day or two before he published a paper on narcoleptic dogs. (He has never worked for Merck, but at that point he was contemplating a commercial partnership.) When he shared his results, it created an instant commotion, as if he’d “put a foot into an ants’ nest.” Not long afterward, Mignot and his team reported that narcoleptic humans lacked not orexin receptors, like dogs, but orexin itself. In narcoleptic humans, the cells that produce orexin have been destroyed, probably because of an autoimmune response.
Orexin seemed to be essential for fending off sleep, and this changed how one might think of sleep. We know why we eat, drink, and breathe—to keep the internal state of the body adjusted. But sleep is a scientific puzzle. It may enable next-day activity, but that doesn’t explain why rats deprived of sleep don’t just tire; they die, within a couple of weeks. Orexin seemed to turn notions of sleep and arousal upside down. If orexin turns on a light in the brain, then perhaps one could think of dark as the brain’s natural state. “What is sleep?” might be a less profitable question than “What is awake?”
There's also a lot of good coverage of the drug's passage through the FDA, particularly the hearing where the agency and Merck argued about the dose. (The FDA was inclined towards a lower 10-mg tablet, but Merck feared that this wouldn't be enough to be effective in enough patients, and had no desire to launch a drug that would get the reputation of not doing very much).
A few weeks later, the F.D.A. wrote to Merck. The letter encouraged the company to revise its application, making ten milligrams the drug’s starting dose. Merck could also include doses of fifteen and twenty milligrams, for people who tried the starting dose and found it unhelpful. This summer, Rick Derrickson designed a ten-milligram tablet: small, round, and green. Several hundred of these tablets now sit on shelves, in rooms set at various temperatures and humidity levels; the tablets are regularly inspected for signs of disintegration.
The F.D.A.’s decision left Merck facing an unusual challenge. In the Phase II trial, this dose of suvorexant had helped to turn off the orexin system in the brains of insomniacs, and it had extended sleep, but its impact didn’t register with users. It worked, but who would notice? Still, suvorexant had a good story—the brain was being targeted in a genuinely innovative way—and pharmaceutical companies are very skilled at selling stories.
Merck has told investors that it intends to seek approval for the new doses next year. I recently asked John Renger how everyday insomniacs would respond to ten milligrams of suvorexant. He responded, “This is a great question.”
There are, naturally, a few shots at the drug industry throughout the article. But it's not like our industry doesn't deserve a few now and then. Overall, it's a good writeup, I'd say, and gets across the later stages of drug development pretty well. The earlier stages are glossed over a bit, by comparison. If the New Yorker would like for me to tell them about those parts sometime, I'm game.
+ TrackBacks (0) | Category: Clinical Trials | Drug Development | Drug Industry History | The Central Nervous System
December 2, 2013
Academic publishing fraud in China has come up here before, but Science has an in-depth look at the problem. And a big problem it is:
"There are some authors who don't have much use for their papers after they're published, and they can be transferred to you," a sales agent for a company called Wanfang Huizhi told a Science reporter posing as a scientist. Wanfang Huizhi, the agent explained, acts as an intermediary between researchers with forthcoming papers in good journals and scientists needing to snag publications. The company would sell the title of co–first author on the cancer paper for 90,000 yuan ($14,800). Adding two names—co–first author and co–corresponding author—would run $26,300, with a deposit due upon acceptance and the rest on publication. A purported sales document from Wanfang Huizhi obtained by Science touts the convenience of this kind of arrangement: "You only need to pay attention to your academic research. The heavy labor can be left to us. Our service can help you make progress in your academic path!"
For anyone who cares about science and research, this is revolting. If you care a lot more about climbing that slippery ladder up to a lucrative position, though, it might be just the thing, right? There are all sorts of people ready to help you realize your dreams, too:
The options include not just paying for an author's slot on a paper written by other scientists but also self-plagiarizing by translating a paper already published in Chinese and resubmitting it in English; hiring a ghostwriter to compose a paper from faked or independently gathered data; or simply buying a paper from an online catalog of manuscripts—often with a guarantee of publication.
Offering these services are brokers who hawk titles and SCI paper abstracts from their perches in China; individuals such as a Chinese graduate student who keeps a blog listing unpublished papers for sale; fly-by-night operations that advertise online; and established companies like Wanfang Huizhi that also offer an array of above-board services, such as arranging conferences and producing tailor-made coins and commemorative stamps. Agencies boast at conferences that they can write papers for scientists who lack data. They cold-call journal editors. They troll for customers in chat programs. . .
The journal contacted 27 agencies in China, with reporters posing as graduate students or other scientists, and asked about paying to get on a list of authors or paying to have a paper written up from scratch. Twenty-two of them were ready to help with either or both. Many of these papers were to be placed in Chinese-language journals, but for a higher fee you could get into more international titles as well. Because of Chinese institutional insistence on high-impact-factor journal publications, people who can deliver that kind of publication can charge as much as a young professor's salary. (Since some institutions turn around and pay a bonus for such publications, though, it can still be feasible).
Some agencies claim they not only prepare and submit papers for a client: They furnish the data as well. "IT'S UNBELIEVABLE: YOU CAN PUBLISH SCI PAPERS WITHOUT DOING EXPERIMENTS," boasts a flashing banner on Sciedit's website.
One timesaver: a ready stock of abstracts at hand for clients who need to get published fast. Jiecheng Editing and Translation entices clients on its website with titles of papers that only lack authors. An agency representative told an undercover Science reporter that the company buys data from a national laboratory in Hunan province.
The article goes on to show that there are many Chinese scientists who are trying to do something about all this. I hope that they succeed, but it's going to take a lot of work to realign the incentives. Unless this happens, the Chinese-language scientific literature risks devolving into a bad joke, and papers from Chinese institutions risk having to go through extra levels of scrutiny when submitted abroad.
+ TrackBacks (0) | Category: The Dark Side | The Scientific Literature
Getting the week off to a bad start is this news from Eisai. They're stopping small-molecule work at their site in Andover, and (like everyone else, it seems) chopping med-chem at their UK site as well. Worldwide, it looks like a loss of 130 positions.
+ TrackBacks (0) | Category: Business and Markets
November 29, 2013
I hope my readers who celebrated Thanksgiving yesterday had a good one. Everything went well here, and there are plenty of turkey leftovers today. My wife always looks forward to a sandwich of turkey in a flour tortilla with hoisin sauce and fresh scallions. I can endorse that one, and I'm also a fan of turkey on pumpernickel with mayonnaise and horseradish. But to each their own! It's a big country, and can accommodate turkey quesadillas, turkey with mango pickle and naan, turkey with barbecue sauce, and who knows what else.
Over the next week or two, as I did last year, I'll be posting some science-themed gift ideas along with my regular postings. I should mention, as I do from time to time, that this blog is an Amazon affiliate, so links to Amazon from here will earn a small commission, at no change in the price on the buyer's end. So if you have some big online shopping to do, I encourage you to pick a blog or site that you've enjoyed during the year and use their affiliate links if they have them - everything that's ordered after such a redirect will send some money back to the site's owner. In my own case, I pledge to use a significant part of any proceeds to buy still more books, thereby stuffing my head with even more marginally useful knowledge.
I'll start off with gifts that you might well be ordering for yourself - books on medicinal chemistry and related fields. This is an updated version of the list I posted last year, with some additions.
At various times, I've asked the readership for the best books on the practice of medicinal chemistry and drug discovery. Here are the favorites mentioned by readers over the last few years (nominations for others are welcome):
For general medicinal chemistry, you have Bob Rydzewski's Real World Drug Discovery: A Chemist's Guide to Biotech and Pharmaceutical Research. Another recommendation is Textbook of Drug Design and Discovery by Krogsgaard-Larsen et al. Many votes also were cast for Camille Wermuth's The Practice of Medicinal Chemistry. For getting up to speed, several readers recommend Graham Patrick's An Introduction to Medicinal Chemistry. And an older text that has some fans is Richard Silverman's The Organic Chemistry of Drug Design and Drug Action.
Process chemistry is its own world with its own issues. Recommended texts here are Practical Process Research & Development by Neal Anderson, Repic's Principles of Process Research and Chemical Development in the Pharmaceutical Industry, and Process Development: Fine Chemicals from Grams to Kilograms by Stan Lee (no, not that Stan Lee) and Graham Robinson. On an even larger scale, McConville's The Pilot Plant Real Book comes recommended by readers here, too.
Case histories of successful past projects can be found in Drugs: From Discovery to Approval by Rick Ng and also in Walter Sneader's Drug Discovery: A History.
Another book that focuses on a particular (important) area of drug discovery is Robert Copeland's Evaluation of Enzyme Inhibitors in Drug Discovery. This is a new edition of the book recommended in this post last year.
Another newer book on a particular area of med-chem is Bioisosteres in Medicinal Chemistry by Brown et al., which also comes recommended by several readers.
For chemists who want to brush up on their biology, readers recommend Terrence Kenakin's A Pharmacology Primer, Third Edition: Theory, Application and Methods, Cannon's Pharmacology for Chemists, and Molecular Biology in Medicinal Chemistry by Nogrady and Weaver.
Overall, one of the most highly recommended books across the board comes from the PK end of things: Drug-like Properties: Concepts, Structure Design and Methods: from ADME to Toxicity Optimization by Kerns and Di. Another recent PK-centric book is Lead Optimization for Medicinal Chemists. For getting up to speed in this area, there's Pharmacokinetics Made Easy by Donald Birkett.
In a related field, the standard desk references for toxicology seem to be Casarett & Doull's Toxicology: The Basic Science of Poisons and Hayes' Principles and Methods of Toxicology. Every medicinal chemist will end up learning a good amount of toxicology, too often the hard way.
As mentioned, titles to add to the list are welcome. I'll be doing a post later on less technical general interest science books as well.
+ TrackBacks (0) | Category: Science Gifts
November 27, 2013
Here's a recipe that I'm trying out this year from The Joy of Pickling, an excellent book full of all sorts of pickle recipes. I have a good-sized batch of this going right now, and samples so far confirm that it's good stuff.
1 cabbage, 2 1/2 pounds (1 kilo), shredded
1 tablespoon salt (17 to 18 grams, table or pickling, not kosher, unless you want to adjust the amounts)
1 medium carrot, shredded
1 apple, sliced
1/2 cup cranberries (55g)
1 tablespoon caraway seeds (7g)
Cut the core from the cabbage, save a couple of outer leaves, and shred it. Add the salt to it in a large bowl, mixing it in well and pressing it together. Add the carrot, the apple (cored and sliced into sixteenths, the book says), the cranberries and the caraway seeds, and mix gently. Place this mixture in some sort of deep crock or jar (jars, if need be). Press the mixture in tight and lay some of the reserved cabbage leaves (or a piece thereof) on top. Weight this down with a small plastic bag (one that's OK for food) full of brine (made from 1.5 tablespoons of salt (24g) in one quart (950 mL) water) - this will keep the cabbage under the liquid layer. If your cabbage was fresh, it should make enough liquid to submerge itself. If not, you can check it after sitting overnight and add some brine (1 tablespoon of salt (18g) in one quart (950 mL) water) to just cover the shredded cabbage.
Leave the jar or jars at room temperature. Twice a day, you'll want to stick a wooden spoon handle down in there a few times to vent the carbon dioxide that will develop. If you don't, especially at first, you're likely to have an overflow, so be warned. Four or five days, at a minimum, should do the trick - after that, you can keep it in a cold room or refrigerator. If you ferment it from the start in a cooler room, it'll take longer, but may have even better flavor. According to The Joy of Pickling, the initial burst of gas is from Leuconostoc mesenteroides, which produces good anaerobic conditions for Lactobacillus plantarum, among others, whose acid fermentation products give the sour flavor.
If you like sauerkraut, you'll be very much up for this. If you're not a big kraut fan, have no fear - this is a lot milder and more delicate than the store-bought stuff, and tastes something like rye bread with all that caraway in there. Enjoy!
+ TrackBacks (0) | Category: Blog Housekeeping
I wanted to note that I'm home today, and will soon be starting my traditional chocolate pecan pie. If you haven't seen it, that link will lead you to a detailed prep, with both US and metric measurements. It's based on Craig Claiborne's recipe, and he certainly knew what he was talking about when it came to Southern food (and much else besides). I've been making it for twenty years now, and if I didn't, there would be a mutiny around here.
I have a pumpkin pie to make as well, and I'd like to get the base of the gravy going, so it can be turkey-enhanced tomorrow. (As for the turkey, for some years now we've bought a kosher one, so it's already been brined. A 17-pound specimen is waiting for tomorrow's efforts). I hope to also make some green beans with country ham, since that reheats just fine, and will save on stove space tomorrow. For country ham, I can recommend Burger's from the Ozarks, available through that Amazon link. Pan-fried country ham has been my traditional Christmas breakfast for my entire life, and my wife and kids now join in with me on that one, but I break it out for Thanksgiving with the green beans. For me, it's wintertime food - I wouldn't turn it down if someone served it to me in July, but it certainly would be a new experience. I grew up eating a brand called Mar-Tenn from west Tennessee, but I don't even know if they exist any more.
The rest of the Thanksgiving meal will include an Iranian basmati rice (with saffron, slivered almonds, sour dried zereshk berries, pistachios, and bits of orange zest), home-made mashed potatoes, creamed onions with sage, pan-roasted Brussels sprouts, and stuffing (my Iranian mother-in-law's own recipe, with bread cubes, cranberries, celery, onion, and pepperoni - how she thought that one up, I don't know, but it's excellent). And this year I'm trying out some Russian sour cabbage (with apples, cranberries, and caraway seeds), which is fermenting away in the basement right now. I'll post the recipe for that later on in the afternoon, after I've made some culinary headway. Update: forgot the stuffed mushrooms and the roasted acorn squash. It's hard to keep track of it all after a certain point!
+ TrackBacks (0) | Category: Blog Housekeeping
As everyone will have heard, the personal-genomics company 23 and Me was told by the FDA to immediately stop selling their product, a direct-to-consumer DNA sequence readout. Reaction to this has been all over the map. I'll pick a couple of the viewpoints to give you the idea.
From one direction, here's Matthew Herper's article, with the excellent title "23 And Stupid". Here's his intro, which makes his case well:
I’d like to be able to start here by railing against our medical system, which prevents patients from getting data about our own bodies because of a paternalistic idea that people can’t look at blood test results, no less genetic information, without a doctor being involved or the government approving the exact language of the test. I’d like to be able to argue that the Food and Drug Administration is wantonly standing in the way of entrepreneurism and innovation by cracking down on 23andMe, a company that is just trying to give patients the ability to know about their own DNA, to understand their own health risks, and to participate in science.
I wish that was the story I’m about to write, but it’s not, and it all really comes down to one fact in the FDA’s brutally scathing warning letter to 23andMe, the Google-backed personal genetics startup. It’s this quote from the letter by Ileana Elder, in the agency’s diagnostics division: “FDA has not received any communication from 23andMe since May.”
So we can call that one the practical view: "It doesn't matter what you think about 23 and Me's product, and it doesn't matter what you think about the FDA. They're supposed to be working with the FDA, they knew it, but they haven't done squat about it, so what did you expect the agency to do, anyway?". From that, let's go to the idealistic view, from economist Alex Tabarrok at Marginal Revolution, who writes just the sort of article that Herper deliberately passes up the chance to:
Let me be clear, I am not offended by all regulation of genetic tests. Indeed, genetic tests are already regulated. To be precise, the labs that perform genetic tests are regulated by the Clinical Laboratory Improvement Amendments (CLIA) as overseen by the CMS (here is an excellent primer). The CLIA requires all labs, including the labs used by 23andMe, to be inspected for quality control, record keeping and the qualifications of their personnel. The goal is to ensure that the tests are accurate, reliable, timely, confidential and not risky to patients. I am not offended when the goal of regulation is to help consumers buy the product that they have contracted to buy.
What the FDA wants to do is categorically different. The FDA wants to regulate genetic tests as a high-risk medical device that cannot be sold until and unless the FDA permits it to be sold.
Moreover, the FDA wants to judge not the analytic validity of the tests, whether the tests accurately read the genetic code as the firms promise (already regulated under the CLIA) but the clinical validity, whether particular identified alleles are causal for conditions or disease. The latter requirement is the death-knell for the products because of the expense and time it takes to prove specific genes are causal for diseases. Moreover, it means that firms like 23andMe will not be able to tell consumers about their own DNA but instead will only be allowed to offer a peek at the sections of code that the FDA has deemed it ok for consumers to see.
The thing is, I can see merits in both these views. And you know, they're not mutually exclusive, either, not as much as it looks like at first glance. I don't even think that the FDA itself thinks that they're so mutually exclusive, if you read their letter (emphasis added):
The Office of In Vitro Diagnostics and Radiological Health (OIR) has a long history of working with companies to help them come into compliance with the FD&C Act. Since July of 2009, we have been diligently working to help you comply with regulatory requirements regarding safety and effectiveness and obtain marketing authorization for your PGS device. FDA has spent significant time evaluating the intended uses of the PGS to determine whether certain uses might be appropriately classified into class II, thus requiring only 510(k) clearance or de novo classification and not PMA approval, and we have proposed modifications to the device’s labeling that could mitigate risks and render certain intended uses appropriate for de novo classification. Further, we provided ample detailed feedback to 23andMe regarding the types of data it needs to submit for the intended uses of the PGS. As part of our interactions with you, including more than 14 face-to-face and teleconference meetings, hundreds of email exchanges, and dozens of written communications, we provided you with specific feedback on study protocols and clinical and analytical validation requirements, discussed potential classifications and regulatory pathways (including reasonable submission timelines), provided statistical advice, and discussed potential risk mitigation strategies. As discussed above, FDA is concerned about the public health consequences of inaccurate results from the PGS device; the main purpose of compliance with FDA’s regulatory requirements is to ensure that the tests work.
As much as I might agree with Alex Tabarrok in principle, I think he's missing a key point here. The FDA is not telling everyone that they don't own their own DNA information, and that they can't see it unless the agency lets them. The agency is saying that 23 and Me can certainly make a business out of selling people their own DNA sequence information, but if they do so by explicitly claiming medical benefits or diagnostic uses, then their business will fall under the FDA's jurisdiction. From their letter, it appears that they have been telling the company this over and over for several years now, during which 23 and Me has, apparently, been dragging their feet and trying to have it both ways. As the FDA letter notes:
For example, your company’s website at www.23andme.com/health (most recently viewed on November 6, 2013) markets the PGS for providing “health reports on 254 diseases and conditions,” including categories such as “carrier status,” “health risks,” and “drug response,” and specifically as a “first step in prevention” that enables users to “take steps toward mitigating serious diseases” such as diabetes, coronary heart disease, and breast cancer.
I'll add a bitter, cynical note: if only 23 and Me had been able to come up with some way to market their DNA test as a nutritional supplement, they'd be in the clear. Maybe some sort of sugar pill that you took before you spit in the little sample container? Then they could say "Not intended to treat, cure, or modify any disease" at the bottom of the page, in six-point microtype, and everything would have been fine, as if by magic. No one would have paid any attention to it, of course, because no one ever pays any attention to that language when they go out and buy all kinds of "supplements", and the FDA would have staggered backwards at the sight of Orrin Hatch's law, like Christopher Lee in a Hammer vampire film being hosed down with a face full of holy water.
Well, that might not have worked perfectly, but it would have worked better than what 23 and Me actually tried. They wouldn't have sold nearly as many DNA tests without talking about preventing disease and making medical decisions in their advertising, true, but those are the breaks. I think that if they'd stuck to some neutral language, rather than presenting Immediate Actionable Medical Decisions, they might well have stayed out of trouble.
Update: via Matt Herper's Twitter feed, here's an interesting take on the whole situation. 23 and Me has been hoping to get some real (and really profitable) insights into population genomics by accumulating such a large sample size. Have they? The way they're acting makes one think that nothing good has popped up yet. . .
+ TrackBacks (0) | Category: Regulatory Affairs
November 26, 2013
One way to look at a drug company's pipeline and portfolio is the "Freshness Index" - how much of its sales are coming from products approved within the past five years. Here's Bernard Munos earlier this year on this topic, where he shows that (too much) revenue lately has been coming from older products. At the time, the figures for the big companies started off with Novartis (19% "fresh" sales), GlaxoSmithKline (12%), J&J (11.8%) and Pfizer (10%).
I bring this up because there's a new look at the freshness index. This one has only products from 2010 or later, and year-to-date sales figures. Under those conditions, it's J&J in the lead (23.4% of sales), then Novartis (17.8%), and Novo Nordisk (13.6%). Now, Novo was not in Munos's list, so I can't say if there's been much of a change there or not, but I find the change in J&J's figures interesting. I don't think that's all due to new approvals - is it older stuff slipping off the list? The new list also has GSK down near the bottom at 2.3% "fresh", which shows you how much the cutoffs matter to these assessments.
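The metric itself is nothing more than simple arithmetic - the share of total revenue coming from recently approved products - which is exactly why the choice of cutoff year matters so much. A minimal sketch (all portfolio figures below are invented for illustration, not any company's actual numbers):

```python
# Freshness index: percentage of total sales from products approved
# within the lookback window (here, 2010 or later, as in the new list).

def freshness_index(products, cutoff_year=2010):
    """products: list of (sales, approval_year) tuples."""
    total = sum(sales for sales, _ in products)
    fresh = sum(sales for sales, year in products if year >= cutoff_year)
    return 100.0 * fresh / total

# Hypothetical portfolio: (sales in $B, approval year)
portfolio = [(4.0, 1998), (2.5, 2005), (1.0, 2011), (0.5, 2013)]
print(freshness_index(portfolio))  # 18.75
```

Move the cutoff back five years and the same portfolio can look dramatically "fresher", which is how GSK can score 12% on one list and 2.3% on another.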
One thing both lists agree on, though, is Eli Lilly. They're at the bottom in both, showing 0.8% of their sales coming from anything approved since 2010. That can't be good, and it isn't. AstraZeneca, Pfizer, Merck, and Sanofi are all in the single digits as well. So's Roche, but their long-running Genentech-driven biotech products make up for that. AZ and Lilly don't exactly have that cushion.
+ TrackBacks (0) | Category: Business and Markets
Here's an article from Science on the problems with mouse models of disease.
For years, researchers, pharmaceutical companies, drug regulators, and even the general public have lamented how rarely therapies that cure animals do much of anything for humans. Much attention has focused on whether mice with different diseases accurately reflect what happens in sick people. But Dirnagl and some others suggest there's another equally acute problem. Many animal studies are poorly done, they say, and if conducted with greater rigor they'd be a much more reliable predictor of human biology.
The problem is that the rigor of animal studies varies widely. There are, of course, plenty of well-thought-out, well-controlled ones. But there are also a lot of studies with sample sizes that are far too small, that are poorly randomized, unblinded, etc. As the article mentions (just to give one example), sticking your gloved hand into the cage and pulling out the first mouse you can grab is not an appropriate randomization technique. They aren't lottery balls - although some of the badly run studies might as well have used those instead.
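Doing this properly is cheap - the point is that group assignment should come from a random number generator, not from whichever animal is easiest to catch. A minimal sketch of what real randomization looks like (the animal IDs and group names here are invented):

```python
import random

def randomize_to_groups(animal_ids, groups, seed=None):
    """Randomly assign animals to treatment groups of (near-)equal size.

    Unlike grabbing the first mouse within reach, every animal has an
    equal chance of landing in any group, regardless of how bold or
    sluggish it is.
    """
    rng = random.Random(seed)
    ids = list(animal_ids)
    rng.shuffle(ids)
    # Deal the shuffled IDs out round-robin into the groups
    return {g: ids[i::len(groups)] for i, g in enumerate(groups)}

assignment = randomize_to_groups(range(1, 21),
                                 ["vehicle", "low_dose", "high_dose"],
                                 seed=42)
print({g: len(ids) for g, ids in assignment.items()})
```

The grab-the-nearest-mouse method, by contrast, systematically favors the slower, tamer (and possibly sicker) animals, which is exactly the kind of bias the critics are worried about.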
After lots of agitating and conversation within the National Institutes of Health (NIH), in the summer of 2012 [Shai] Silberberg and some allies went outside it, convening a workshop in downtown Washington, D.C. Among the attendees were journal editors, whom he considers critical to raising standards of animal research. "Initially there was a lot of finger-pointing," he says. "The editors are responsible, the reviewers are responsible, funding agencies are responsible. At the end of the day we said, 'Look, it's everyone's responsibility, can we agree on some core set of issues that need to be reported' " in animal research?
In the months since then, there's been measurable progress. The scrutiny of animal studies is one piece of an NIH effort to improve openness and reproducibility in all the science it funds. Several institutes are beginning to pilot new approaches to grant review. For an application based on animal results, this might mean requiring that the previous work describe whether blinding, randomization, and calculations about sample size were considered to minimize the risk of bias. . .
Not everyone thinks that these new rules are going to work, though, or are even the right way to approach the problem:
Some in the field consider such requirements uncalled for. "I am not pessimistic enough to believe that the entire scientific community is obfuscating results, or that there's a systematic bias," says Joseph Bass, who studies mouse models of obesity and diabetes at Northwestern University in Chicago, Illinois. Although Bass agrees that mouse studies often aren't reproducible—a problem he takes seriously—he believes that's not primarily because of statistics. Rather, he suggests the reasons vary by field, even by experiment. For example, results in Bass's area, metabolism, can be affected by temperature, to which animals are acutely sensitive. They can also be skewed if a genetic manipulation causes a side effect late in life, and researchers try to use older mice to replicate an effect observed in young animals. Applying blanket requirements across all of animal research, he argues, isn't realistic.
I think, though, that there must be some minimum requirements that could be usefully set, even with every field having its own peculiarities. After all, the same variables that Bass mentions above - which are most certainly real ones - could affect studies in completely different fields. This, of course, is one of the biggest reasons that drug companies restrict access to their animal facilities. There's always a separate system to open those doors, and if you don't have the card to do it, you're not supposed to be in there. Pace the animal rights activists, that's not because it's so terrible in there that the rest of us wouldn't be able to take it. It's because they don't want anyone coming in there and turning on lights, slamming doors, sneezing, or doing any of four dozen less obvious things that could screw up the data. This stuff is expensive, and it can be ruined quite easily. It's like waiting for a four-week-long soufflé to rise.
That brings up another question - how do the animal studies done in industry compare to those done in academia? The Science article mentions some work done recently by Lisa Bero of UCSF. She was looking at animal studies on the effects of statins, and found, actually, that industry-sponsored research was less likely to find that the drug under investigation was beneficial. The explanation she advanced is a perfectly good one: if your animal study is going to lead you to spend the big money in the clinic, you want to be quite sure that you can believe the data. That's not to say that there aren't animal studies in the drug industry that could be (or could have been) run better. It's just that there are, perhaps, more incentives to make sure that the answer is right, rather than just being interesting and publishable.
Doesn't the same reasoning apply to human studies? It certainly should. The main complicating factor I can think of is that once a company, particularly a smaller one, has made the big leap into human clinical trials, it also has an incentive to find something that's good enough to keep going with, and/or good enough to attract more investment. So perverse incentives are, I'd guess, more of a problem once you get to human trials, because it's such a make-or-break situation. People are probably more willing to get the bad news from an animal study and just groan and say "Oh well, let's try something else". Saying that after an unsuccessful Phase II trial is something else again, and takes a bit more sang-froid than most of us have available. (And, in fact, Bero's previous work on human trials of statins seems to show various forms of bias at work, although publication bias is surely not the least of them).
+ TrackBacks (0) | Category: Animal Testing
November 25, 2013
Michael Shultz of Novartis is back with more thoughts on how we assign numbers to drug candidates. Previously, he's written about the mathematical wrongness of many of the favorite metrics (such as ligand efficiency), in a paper that stirred up plenty of comment.
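For readers who haven't run into it, the metric at issue is simple enough to state in a few lines, which is part of its appeal - and, per Shultz, part of the problem. A sketch of the standard definition (binding free energy per heavy atom, with the usual ~1.37 kcal/mol per log unit of potency at around 300 K; the example compound is hypothetical):

```python
import math

def ligand_efficiency(pIC50, heavy_atoms, temp_kelvin=300.0):
    """Ligand efficiency: approximate binding free energy (kcal/mol)
    divided by the number of non-hydrogen atoms.

    Uses dG ~ -RT * ln(10) * pIC50; at 300 K, RT*ln(10) is about
    1.37 kcal/mol per log unit of potency.
    """
    R = 0.0019872  # gas constant, kcal/(mol*K)
    delta_g = -R * temp_kelvin * math.log(10) * pIC50
    return abs(delta_g) / heavy_atoms

# A hypothetical 10 nM (pIC50 = 8) compound with 30 heavy atoms
print(round(ligand_efficiency(8.0, 30), 2))  # 0.37
```

That single number is seductive - one scalar per compound, easy to rank - which is precisely the kind of simplification Shultz argues can mislead.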
His new piece in ACS Medicinal Chemistry Letters is well worth a look, although I confess that (for me) it seemed to end just when it was getting started. But that's the limitation of a Viewpoint article for a subject with this much detail in it.
Shultz makes some very good points by referring to Daniel Kahneman's Thinking, Fast and Slow, a book that's come up several times around here as well (in both posts and comments). The key concept here is called "attribute substitution", which is the mental process by which we take a complex situation, which we find mentally unworkable, and try to substitute some other scheme which we can deal with. We then convince ourselves, often quickly, silently, and without realizing that we're doing it, that we now have a handle on the situation, just because we now have something in our heads that is more understandable. That "Ah, now I get it" feeling is often a sign that you're making headway on some tough subject, but you can also get it from understanding something that doesn't actually help you with the real problem at all.
And I'd say that this is the take-home for this whole Viewpoint article, that we medicinal chemists are fooling ourselves when we use ligand efficiency and similar metrics to try to understand what's going on with our drug candidates. Shultz goes on to discuss what he calls "Lipinski's Anchor". Anchoring is another concept out of Thinking Fast and Slow, and here's the application:
The authors of the ‘rules of 5’ were keenly aware of their target audience (medicinal chemists) and “deliberately excluded equations and regression coefficients...at the expense of a loss of detail.” One of the greatest misinterpretations of this paper was that these alerts were for drug-likeness. The authors examined the World Drug Index (WDI) and applied several filters to identify 2245 drugs that had at least entered phase II clinical development. Applying a roughly 90% cutoff for property distribution, the authors identified four parameters (MW, logP, hydrogen bond donors, and hydrogen bond acceptors) that were hypothesized to influence solubility and permeability based on their difference from the remainder of the WDI. When judging probability, people rely on representativeness heuristics (a description that sounds highly plausible), while base-rate frequency is often ignored. When proposing oral drug-like properties, the Gaussian distribution of properties was believed, de facto, to represent the ability to achieve oral bioavailability. An anchoring effect is when a number is considered before estimating an unknown value and the original number significantly influences future estimates. When a simple, specific, and plausible MW of 500 was given as cutoff for oral drugs, this became the mother of all medicinal chemistry anchors.
But how valid are molecular weight cutoffs, anyway? That's a topic that's come up around here a few times, too, as well it should. Comparisons of the properties of orally available drugs across their various stages of development seem to suggest that such measurements converge on what we feel are the "right" values, but as Shultz points out, there could be other reasons for the data to look that way. And he makes this recommendation: "Since the average MW of approved oral drugs has been increasing while the failure rate due to PK/bioavailability has been decreasing, the hypothesis linking size and bioavailability should be reconsidered."
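The alerts being anchored on are themselves trivial to state in code, which may be part of why they stick so hard. A sketch of the standard four-parameter check (the property values would normally come from a cheminformatics toolkit; here they're simply passed in, and the example compound is made up):

```python
def rule_of_five_violations(mw, clogp, h_donors, h_acceptors):
    """Count Lipinski 'rule of 5' violations for a compound.

    The original paper flags likely absorption/permeation problems
    when more than one of these limits is exceeded - it was never
    meant as a definition of 'drug-likeness'.
    """
    violations = 0
    if mw > 500:
        violations += 1
    if clogp > 5:
        violations += 1
    if h_donors > 5:
        violations += 1
    if h_acceptors > 10:
        violations += 1
    return violations

# A hypothetical largish, greasy compound: two violations
print(rule_of_five_violations(mw=620, clogp=5.8, h_donors=2, h_acceptors=7))  # 2
```

Four comparisons and a counter: simple, specific, and plausible, which is exactly the profile of a powerful anchor.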
I particularly like another line, which could probably serve as the take-home message for the whole piece: "A clear understanding of probabilities in drug discovery is impossible due to the large number of known and unknown variables." I agree. And I think that's the root of the problem, because a lot of people are very, very uncomfortable with that kind of talk. The more business-school training they have, the less they like the sound of it. The feeling is that if we'd just use modern management techniques, it wouldn't have to be this way. Closer to the science end of things, the feeling is that if we'd just apply the right metrics to our work, it wouldn't have to be that way, either. Are both of these mindsets just examples of attribute substitution at work?
In the past, I've said many times that if I had to work from a million compounds that were within rule-of-five cutoffs versus a million that weren't, I'd go for the former every time. And I'm still not ready to ditch that bias, but I'm certainly ready to start running up the Jolly Roger about things like molecular weight. I still think that the clinical failure rate is higher for significantly greasier compounds (both because of PK issues and because of unexpected tox). But molecular weight might not be much of a proxy for the things we care about.
This post is long enough already, so I'll address Shultz's latest thoughts on ligand efficiency in another entry. For those who want more 50,000-foot viewpoints on these issues, though, these older posts will have plenty.
+ TrackBacks (0) | Category: Drug Development | Drug Industry History
November 22, 2013
A look back at the way it used to be, courtesy of ChemTips. What did you do without NMR, without LC-mass spec? You tried all kinds of tricks to get solids that you could recrystallize, and liquids that you could distill. I missed out on that era of chemistry, and most readers here can say the same. But it's a good mental exercise to picture what things used to be like.
+ TrackBacks (0) | Category: Chemical News
Here's a podcast interview I did recently for "Science For the People" (formerly known as "Skeptically Speaking"), where they quizzed me about some of the "Things I Won't Work With" compounds. The whole show is worth listening to (there's Scicurious and ZeFrank in there), but I come in at about the 38-minute mark.
+ TrackBacks (0) | Category: Blog Housekeeping
You've seen those "Call for papers" notices from journals or conferences? Over at Synthetic Remarks, there's a call for whistleblowers. Reacting to the recent reports of scientific fraud, Fredrik von Kieseritzky is asking those who want to get the details of such things out safely to contact him. Swedish law is very protective of sources, and he's basically making that jurisdiction available to anyone it might help. He also has some sound advice on how to communicate such things (Tor, PGP, TrueCrypt), which are the sorts of tools that the modern world, apparently, is going to make sure everyone stays current with, whether they feel like it or not.
This is a sincere offer, and may well draw some sincere responses. We'll see. . .
+ TrackBacks (0) | Category: The Dark Side
November 21, 2013
Here's a very surprising idea that looks like it can be put to an experimental test. Mao-Sheng Miao (of UCSB and the Beijing Computational Sciences Research Center) has published a paper suggesting that under high-pressure conditions, some elements could show chemical bonding behavior involving their inner-shell electrons. Specific predictions include high-pressure forms of cesium fluoride - not just your plain old CsF, but CsF3 and CsF5, and man, do I feel odd writing down those formulae.
These have completely different geometries, and should be readily identifiable should they actually form. I'm thinking of this as cesium giving up its lone valence electron, and then you're left with a xenon-like arrangement. And xenon, as Neil Bartlett showed the world in 1962, can certainly go on to form fluorides. Throw in some pressure, and (perhaps) the deed is done in cesium's case. So I very much look forward to an experimental test of this idea, which I would imagine we'll see pretty shortly.
+ TrackBacks (0) | Category: Chemical News
I wanted to mention to readers here that I've agreed to write a book (for a general audience) on chemistry for Sterling Publishers (the publishing arm of Barnes and Noble). They've been putting out a series of books (Sterling Milestones) on various scientific topics, looking at 250 key concepts or historical events. There's a short essay on each of these, and an illustration on the facing page. Clifford Pickover did The Math Book, The Physics Book, and The Medical Book for them, and recently they've published The Drug Book, The Space Book, and The Psychology Book as well. So I'm doing The Chemistry Book, which occupies me on my train rides home after work and after dinner - my wife and kids have been involuntarily roped in as the test audience for the entries.
The book itself won't be out for a while - I'm delivering the manuscript next spring, and there will surely be a lot of editorial work after that. I have over 200 of the short chapters outlined so far, but I'm leaving some room for more topics as they occur to me (and as the chapters I'm writing suggest - sometimes I find that I have to include another topic to make the one I'm working on make sense to the eventual readers).
I don't want to give away the complete list of chapters just yet, not least because it's still changing around, but I would like to solicit nominations for events and ideas that anyone thinks I should be sure to cover. The book spans the whole historical record, up to the present day, in all fields of chemistry, so in one sense the challenge is narrowing it down to just 250 short essays. The other challenge is actually writing 250 short essays, of course. I'm doing OK against my list so far, but there are some topics that are difficult to do justice to in 350 words, as will be easily appreciated by the chemists around here.
So if anyone has some topics, obvious or nonobvious, that they think a book like this should be sure to include, please mention them in the comments. I'm sure some of them will already be on the list, but since I have room to add more, I certainly don't want to miss too many good opportunities. Thanks very much!
And yes, the "Things I Won't Work With" manuscript is being worked on as well. "The Chemistry Book" is giving me some practice at integrating a longer manuscript, and I've been adding some new material along the way. The trickier part of that one has been getting rid of some repetition that you notice when the original blog posts are stacked up together. But it's definitely in the hopper.
Update: a lot of good ideas in the comments! Many of them were already on my list, but I've already seen some that I wouldn't have thought of, and some others that I really should have but overlooked. Much appreciated! Anyone who hasn't added something and still wants to, though, feel free - I'll be checking this post pretty frequently.
+ TrackBacks (0) | Category: Book Recommendations
November 20, 2013
Double Nobelist Frederick Sanger has died at 95. He is, of course, the pioneer in both protein and DNA sequencing, and he lived to see these techniques, revised and optimized beyond anyone's imagining, become foundations of modern biology.
When he and his team determined the amino acid sequence of insulin in the 1950s, no one was even sure if proteins had definite sequences or not. That work, though, established the concept for sure, and started off the era of modern protein structural studies, whose importance to biology, medicine, and biochemistry is completely impossible to overstate. The amount of work needed to sequence a protein like insulin was ferocious - this feat was just barely possible given the technology of the day, and that's even with Sanger's own inventions and insights (such as Sanger's reagent) along the way. He received a well-deserved Nobel in 1958 for having accomplished it.
In the 1970s, he made fundamental advances in sequencing DNA, such as the dideoxy chain-termination method, again with effects which really can't be overstated. This led to a share of a second chemistry Nobel in 1980 - he's still the only double laureate in chemistry, and every bit of that recognition was deserved.
+ TrackBacks (0) | Category: Biological News | Chemical News
There's a report of a new technique to solve protein crystal structures on a much smaller scale than anyone's done before. Here's the paper: the team at the Howard Hughes Medical Institute has used cryo-electron microscopy to do electron diffraction on microcrystals of lysozyme protein.
We present a method, ‘MicroED’, for structure determination by electron crystallography. It should be widely applicable to both soluble and membrane proteins as long as small, well-ordered crystals can be obtained. We have shown that diffraction data at atomic resolution can be collected and a structure determined from crystals that are up to 6 orders of magnitude smaller in volume than those typically used for X-ray crystallography.
For difficult targets such as membrane proteins and multi-protein complexes, screening often produces microcrystals that require a great deal of optimization before reaching the size required for X-ray crystallography. Sometimes such size optimization becomes an impassable barrier. Electron diffraction of microcrystals as described here offers an alternative, allowing this roadblock to be bypassed and data to be collected directly from the initial crystallization hits.
X-ray diffraction is, of course, the usual way to determine crystal structures. Electrons can do the same thing for you, but practically speaking, that's been hard to realize in a general sense. Protein crystals don't stand up very well to electron beams, particularly if you crank up the intensity in order to see lots of diffraction spots. Electrons interact strongly with atoms, which is nice, because you don't need as big a sample to get diffraction, but they interact so strongly that things start falling apart pretty quickly. You can collect more data by zapping more crystals, but the problem is that you don't know how these things are oriented relative to each other. That leaves you with a pile of jigsaw-puzzle diffraction data and no easy way to fit it together. So the most common application for protein electron crystallography has been for samples that crystallize in a thin film or monolayer - that way, you can continue collecting diffraction data while being a bit more sure that everything is facing in the same direction.
In this new technique, the intensity of the electron beam is turned down greatly, and the crystal itself is precisely rotated through 90 one-degree increments. The team has developed methods to handle the data and combine it into a useful set, and were able to get a 2.9-angstrom resolution on lysozyme crystals that are (as described above) far smaller than the usual standard for X-ray work, as shown. There's been a lot of work over the years to figure out how low you can set the electron intensity and still get useful data in such experiments, and this work started off by figuring out how much total radiation the crystals could stand and dividing that out into portions.
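The dose-fractionation arithmetic behind that last point is straightforward: take the total electron dose the crystal can absorb before damage degrades the data, and spread it evenly over the full rotation series. A sketch with illustrative numbers (the actual dose budget is the paper's; the figures here are placeholders):

```python
def dose_per_frame(total_tolerable_dose, n_frames):
    """Split a crystal's total electron-dose budget evenly across frames.

    total_tolerable_dose: dose the crystal can absorb before diffraction
        quality is lost (e.g. electrons per square angstrom; illustrative).
    n_frames: number of rotation increments (90 one-degree steps here).
    """
    return total_tolerable_dose / n_frames

# A hypothetical budget of 9 e-/A^2 spread over 90 one-degree frames
print(dose_per_frame(9.0, 90))  # 0.1
```

The per-frame signal is correspondingly weak, which is why the data-merging methods the team developed are as much a part of the result as the hardware.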
The paper, commendably, has a long section detailing how they tried to check for bias in their structure models, and the data seem pretty solid, for what that's worth coming from a non-crystallographer like me. This is still a work in progress, though - lysozyme is about the easiest example possible, for one thing. The authors describe some of the improvements in data collection and handling that would help make this a regular structural biology tool, and I hope that it does so. There's a lot of promise here - being able to pull structures out of tiny "useless" protein crystals would be a real advance.
+ TrackBacks (0) | Category: Analytical Chemistry
November 19, 2013
The Journal of Biomolecular Screening has a new issue devoted to phenotypic and functional screening approaches, and there looks to be some interesting material in there. The next issue will be Part II (they got so many manuscripts that the intended single issue ran over), and it all seems to have been triggered by the 2011 article in Nature Reviews Drug Discovery that I blogged about here. The Society for Laboratory Automation and Screening set up a special interest group for phenotypic drug discovery after that paper came out, and according to the lead editorial in this new issue, it quickly grew to become the largest SIG and one of the most active.
The reason for this might well be contained in the graphic shown, which is based on data from Bernard Munos. I'm hoping that those historical research spending numbers have been adjusted for inflation, and I believe that they have (since they were in Munos's original paper).
There's an update to the original Swinney and Anthony NRDD paper in this issue, too, and I'll highlight that in another post.
+ TrackBacks (0) | Category: Drug Assays | Drug Industry History
I wrote here about the recent article by John Bohannon in Science, where he submitted a clearly substandard article to a long list of open-access publishers. (The results reflected poorly on many of them). Now here's a follow-up interview with Bohannon at The Scholarly Kitchen, where he addresses many of the critiques of the piece. Well worth a read if this issue interests you.
Q: What has been the response of editors and publishers? Have any journals ceased publication? Have any editors/editorial board members resigned in protest? Do any of them blame you, personally, for the outcomes? Have any threats (legal or otherwise) been made towards you or Science Magazine as a result of the exposé?
A couple of weeks before the story was published, I contacted editors and publishers specifically named in the story. Their responses are printed in the article, ranging from falling on their own sword and accepting responsibility to blaming the publishers and claiming they were not involved with the journal. But since the story was published, editors and publishers have largely fallen silent.
One exception is an editor based in the Middle East who says that the sting has cost him his job. It pains me to hear that. But then again, he clearly wasn’t doing his job.
As far as I can tell, it has been business as usual for the 157 publishers that got stung. I know of only one fully confirmed closure of a journal (as reported in Retraction Watch). There have been statements by publishers that they intend to close a journal, but I’ll believe it when I see it.
Of course, closing a single journal is nothing but a pinprick for many of the publishers that accepted the fake paper. Most publish dozens–some, hundreds–of titles.
I was bracing myself for lawsuits and PR blitzes from the publishers and editors that got stung. Ironically, the attacks came instead from advocates of the open access movement.
+ TrackBacks (0) | Category: The Scientific Literature
November 18, 2013
I don't know how many readers out there use the cLogP function in ChemDraw, but you might want to take a look at the illustration here before you use it again. A reader alerted me to this glitch: drawing in explicit hydrogens sends it into an even stranger world of fantasy than most calculated logP values inhabit. There seems to be a limit, though, as that cyclopropane series illustrates. (Note that the "logP" function resists temptation).
PerkinElmer says that there are several logP calculating functions built in, but that cLogP is from BioByte. I don't think that they got their money's worth from them. Now, this isn't the first glitch like this I've seen - a lot of chemical drawing and searching programs choke on details here and there. But this is the sort of thing you'd have thought would have been fixed by now.
Update: it has indeed been fixed, according to the fast-acting Phillip Skinner at PerkinElmer. But apparently many of us out in user-land aren't using updated versions. For what it's worth, I have ChemBioDraw Ultra 13.02 v. 3020 - are others getting more reliable numbers with newer software?
Note: here's where the patch can be downloaded.
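For those curious what's going on under the hood: cLogP-style estimators are atom- or fragment-additive — each atom type gets a fitted contribution and the prediction is just the sum. Here's a toy sketch of that scheme (the contribution values and atom types are invented for illustration; they are not BioByte's actual parameters), showing how a bookkeeping slip over explicit hydrogens could inflate the answer, which is one plausible guess at the kind of glitch above:

```python
# Toy atom-additive logP estimator. Contribution values below are
# invented for illustration; real methods (Crippen, BioByte's cLogP)
# fit many more atom/fragment types to experimental partition data.
CONTRIB = {
    "C.aliphatic": 0.36,   # carbon type with its hydrogens folded in
    "O.hydroxyl": -0.95,
    "H.explicit": 0.23,    # a separate H type -- the bookkeeping trap
}

def toy_logp(atoms):
    """Sum the per-atom contributions for a typed atom list."""
    return sum(CONTRIB[a] for a in atoms)

# n-Propanol with hydrogens implicit: 3 carbons + 1 hydroxyl oxygen.
implicit = ["C.aliphatic"] * 3 + ["O.hydroxyl"]

# The same molecule drawn with explicit hydrogens. If the atom typer
# still assigns the H-folded carbon type AND adds contributions for
# the 8 drawn hydrogens, every hydrogen is counted twice and the
# estimate drifts upward even though the structure is identical.
buggy_explicit = implicit + ["H.explicit"] * 8

print(toy_logp(implicit))        # consistent estimate
print(toy_logp(buggy_explicit))  # same molecule, inflated estimate
```

The point is only that additive schemes are exactly as good as their atom-typing step, which is why the same structure drawn two ways should always give the same number.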
+ TrackBacks (0) | Category: In Silico
I'm hearing reports of re-organization and worse at Shire's UK R&D today (Basingstoke). There had been reports in the press earlier this month, but things seem to be underway. The medicinal chemistry situation in the UK is surely the worst it's been in living memory. . .
+ TrackBacks (0) | Category: Business and Markets
November 15, 2013
I wrote here about chemistry sets as gifts for science-minded youngsters, and at the time, the only recommendation I could make was the Thames and Kosmos line (which are definitely still worth a look). A reader sends along another possibility, though: the Heirloom Chemistry Set, a deliberate attempt to recreate the classic sets of 50 or 60 years ago. It's not cheap, but it certainly looks like the real deal. This part, though, is cause for concern:
Regarding equipment, while we have shipped custom chemistry sets (both chemicals and equipment) to customers in each of the 50 states for the past 10 years it needs to be noted that some states do frown on its citizens owning chemical glassware. We recommend that if this is a concern to you that you contact your state and/or local authorities to ascertain what may be allowed.
Does anyone have more detail on this? Can you really get in trouble for owning Erlenmeyer flasks, beakers, and graduated cylinders, the kind of everyday chemical glassware sold with this set? I'm pretty sure that most backyard methamphetamine jockeys don't bother with decent glassware, you know?
+ TrackBacks (0) | Category: Science Gifts
I wrote here about Zafgen and their covalent Met-Ap2 inhibitor beloranib. Word is out today that the compound has passed its first Phase II trial handily, so score one for covalent epoxides as drug candidates.
Zafgen has followed up promising results from early-stage work on its weight drug beloranib with a stellar Phase II study that tracked rapid weight loss among the severely obese, with one group shedding an average of 22 pounds in 12 weeks. CEO Tom Hughes says the mid-stage success clears a path to a Phase IIb trial that can fine tune the dose while taking more time to gauge the longterm impact of its treatment on weight. And the data harvest sets the right tone for ongoing talks with investors about a new financing round for the biotech.
Efficacy, though, doesn't seem to have been in much doubt with this compound. Phase III will be the big one, because the worry here will be some sort of funny longer-term toxicity. No one's quite sure what inhibiting that enzyme will do (other than this pretty impressive weight loss), and a covalent drug (even a relatively benign and selective one like an epoxide) is always going to have questions around it until it's proven itself in human tox. But so far, so good.
One thing that beloranib has going for it is that patients would presumably take it for a relatively limited course of therapy and then try to keep the weight off on their own. That's a big distinction, toxicologically. On one end of the spectrum, you've got your one-time-use drugs, like an anesthetic, and then there are the anti-infectives that you might take for two weeks or (at most) a few months. But at the other end, you have the cardiovascular and diabetes drugs that your patient population is going to be taking every morning for the rest of their lives, and the safety profile is clearly going to have to be that much cleaner in those cases.
Critics of the industry never fail to mention that we, supposedly, are not looking for cures, but rather for drugs in that latter category so we can reap the big, big profits. They haven't thought this through well enough: for one thing, a cure is worth more money up front. And there is that tiny little factor of patent lifetime. To hear some people talk, you'd think that a drug's discoverers continue to reap the gains forever, but it ain't so. Ask Eli Lilly right now how that's going - most of their revenue is in the process of packing up and leaving for the generics companies. It doesn't matter if a company finds a drug that people need to take for fifty years; they're not going to be selling it that long.
Back to Zafgen, though. They've got an interesting program going here, and I'm very curious to see how it works out. Going after obesity from the metabolic end is something that a lot of people have tried, through various mechanisms, but it's still probably a better bet than trying to affect appetite. And I'll be glad to see an epoxide-based drug prove itself in the clinic, because I think that evidence suggests that they're better drug candidates than we give them credit for (see the link in the first line of this post for more on that). We medicinal chemists need all the options we can get. From the way things look, I'd bet on beloranib going fine through the rest of Phase II - and then begins the finger-crossing and rabbit's-footing.
+ TrackBacks (0) | Category: Clinical Trials | Diabetes and Obesity
November 14, 2013
If you read the publications on the GSK compound (darapladib) that just failed in Phase III, you may notice something odd. These mention "odor" as a side effect in the clinical trial subjects. Say what?
If you look at the structure, there's a para-fluorobenzyl thioether in there, and I've heard that this is apparently not oxidized in vivo (oxidation being a common fate for sulfides). That sends potentially smelly parent compound (and other metabolites?) into general circulation, where it can exit in urine and feces and even show up in things like sweat and breath. Off the top of my head, I can't think of another modern drug that has a severe odor liability. Anyone have examples?
Update: plenty of examples in the comments!
+ TrackBacks (0) | Category: Cardiovascular Disease | Clinical Trials | Pharmacokinetics
A reader sent along a note about some of the recent layoffs we've been seeing across pharma/biotech companies. Different organizations have very different ways of handling things like stock units and options, and in a layoff one of the key variables is the vesting schedule. Remember, if you have compensation of this sort that hasn't vested, it isn't yours. It may show up in your statements (albeit under its own category), and you will have seen the announcements of it being awarded to you, but until this stuff vests that's all provisional.
I think that most people realize that part, although if you don't, it's going to come as a nasty surprise. What seems to have caught some people unawares recently, though, is that companies have different schedules for vesting the employer match contributions in 401(k) accounts. Some companies vest immediately, but those are probably not a majority. The rest follow different schedules, and if you are laid off, that unvested money is going to disappear unless the company specifically changes its mind about that.
That said, there's not much you can do about this, but it's worth knowing the real situation in advance so you can plan accordingly. Seeing money disappear from their retirement accounts was something of a shock to people in these recent layoffs, from what I'm told, making a bad situation just that much more painful.
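To make the arithmetic concrete, here's a small sketch of a graded vesting schedule (the schedule and dollar amounts are invented for illustration; your own plan documents are what actually govern):

```python
def vested_fraction(years_of_service, schedule):
    """Return the vested fraction of an employer match under a graded
    schedule. `schedule` maps completed years of service to the vested
    fraction earned at that anniversary."""
    best = 0.0
    for years, fraction in schedule.items():
        if years_of_service >= years:
            best = max(best, fraction)
    return best

# A hypothetical 5-year graded schedule: 20% per completed year.
GRADED = {1: 0.2, 2: 0.4, 3: 0.6, 4: 0.8, 5: 1.0}

# Laid off after 3 years with $18,000 of accumulated employer match
# (numbers invented): 60% is yours, the rest is forfeited.
employer_match = 18_000
kept = employer_match * vested_fraction(3, GRADED)
forfeited = employer_match - kept
print(kept, forfeited)
```

Your own contributions are always 100% yours; it's only the employer match that sits on a clock like this.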
+ TrackBacks (0) | Category: Business and Markets
The topic of "stack ranking" came up around here recently, so I wanted to pass on some news in this area. Microsoft, one of the most notorious practitioners, has apparently decided to stop. It's not clear to me just what they're replacing it with, but whatever system they're trying out could hardly be more demoralizing than what it's replacing. Drawing raises and bonuses out of a bingo cage would be preferable - at least then people wouldn't be at each other's throats.
Even if all members of a team performed well that year, the manager was required to designate a set percentage as underperformers — a practice that drew fire from employees. Many thought the system rewarded internal politicking, withholding of information and back-stabbing, rather than rewarding innovation or cooperation.
That review system has been blamed by some for causing Microsoft to fall behind other tech companies in the past decade in key areas such as mobile computing.
In an email to employees Tuesday, Lisa Brummel, Microsoft’s head of human resources, wrote that the new review process will now have “no more curve.”
“We will continue to invest in a generous rewards budget, but there will no longer be a predetermined targeted distribution,” she wrote. “Managers and leaders will have flexibility to allocate rewards in the manner that best reflects the performance of their teams and individuals, as long as they stay within their compensation budget.”
In addition, she wrote, there will be “no more ratings. This will let us focus on what matters — having a deeper understanding of the impact we’ve made and our opportunities to grow and improve.”
It's a constant struggle, figuring out how to reward people without providing perverse incentives. And it's a problem with a lot of range, too, when you consider how it applies to businesses, schools, welfare policy, and many other areas. It's probably a safe bet that there's no general solution, but that doesn't give people an excuse to stick with something that's worse than neutral. The "two men enter, one man leaves" aspect of a rank-and-yank system is definitely that. (As a side issue, it also assumes that the rating process is accurate enough to make calls like this, which is very much debatable in many organizations). Maybe Microsoft will start a trend here. . .
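The mechanics of a forced curve are simple enough to sketch (the cutoff here is invented for illustration, not Microsoft's actual policy): no matter how the absolute scores cluster, a fixed slice of every team lands in the bottom bucket.

```python
def stack_rank(scores, bottom_frac=0.2):
    """Label the bottom `bottom_frac` of a team 'underperformer'
    regardless of absolute performance. Cutoff is illustrative."""
    n_bottom = max(1, int(len(scores) * bottom_frac))
    worst_first = sorted(scores, key=scores.get)
    bottom = set(worst_first[:n_bottom])
    return {name: ("underperformer" if name in bottom else "meets")
            for name in scores}

# A uniformly strong team (scores invented) still loses someone to
# the bottom bucket -- that is the perverse incentive in a nutshell.
team = {"ana": 91, "ben": 93, "cy": 90, "dee": 95, "eli": 92}
print(stack_rank(team))
```

A five-point spread at the top of the scale still produces a designated "underperformer," which is exactly the property employees complained about.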
+ TrackBacks (0) | Category: Business and Markets
November 13, 2013
Good news is always welcome. And I can report today that longtime infomercial pitchman Kevin Trudeau has been jailed on criminal contempt charges, and has another judge breathing down his neck and ordering him to pay $38 million in restitution. This will put me in a good mood for the rest of the day.
Jurors took less than an hour to find Trudeau, 50, guilty of violating a 2004 federal court settlement with the Federal Trade Commission that barred him from misrepresenting the contents of his books in advertisements, said Randall Samborn, a spokesman for the U.S. Attorney's Office in Chicago.
Trudeau, who was jailed twice in recent months for civil contempt by a different federal judge in Chicago, faces potential prison time for the criminal contempt conviction in the trial before U.S. District Judge Ronald Guzman.
Prosecutors had argued Trudeau knowingly violated the 2004 agreement while marketing his book, "The Weight Loss Cure 'They' Don't Want You To Know About," in infomercials made in 2006 and 2007 that aired about 32,000 times.
In part, Trudeau told viewers in the infomercials that the "cure" to obesity was not a diet and did not require exercise, but the book instructed readers to walk an hour each day and to limit intake to 500 calories.
I've looked in on Trudeau's all-natural pharma-is-killing-you hoo-hah several times over the years, and in 2008 I hoped for jail time in his future. And here we are, at long last. A sleazy, heartless charlatan who has defrauded gullible customers out of uncounted millions of dollars is finally being dragged off to the slammer by deputies, and I hope they're wearing gloves. About damn time. Today, let us celebrate.
+ TrackBacks (0) | Category: Snake Oil
I briefly mentioned Sarepta and eteplirsen, their proposed therapy for Duchenne muscular dystrophy (DMD), in September. In that post, I made reference to the "delirious fun of investing in biotech". Well, the company recently got some regulatory news that illustrates that point even more clearly. The FDA told Sarepta that it would not get accelerated approval for the drug, and that sent the stock into a mineshaft (and infuriated the DMD community, as you might well think).
Matthew Herper at Forbes has some good background on the story here. Eteplirsen is one of these drugs aimed at a small market (one particular DMD mutation). And the clinical data were pretty thin:
It can be hard to imagine saying no to a plea like that – but sometimes that is the FDA’s job. As one muscular dystrophy expert told me when I wrote about Sarepta’s results earlier this year, it was always possible that it might be “too much to hope for” to think that eteplirsen could be approved based on the data so far. Eteplirsen was studied in only twelve boys, half of whom received the medicine immediately, the other half of whom initially got placebo but then switched to taking the drug. Those who started on the medicine earlier have higher levels of dystrophin, at least according to muscle biopsies, and appeared to be able to walk a greater distance in six minutes, a sign that their muscles are deteriorating less quickly.
Unfortunately, great results from small trials have a history of not bearing out in larger studies. Even for rare disease drugs, this study was tiny. Worse, the Sarepta results only look good when two of the 12 patients are excluded – two boys were too sick to be helped by the drug. The FDA usually insists that clinical trials be presented in what is known as an “intent-to-treat” analysis, which means that if you even thought about treating a patient they need to be included when you do the math on the study’s results. This is intended to keep scientists from lying to themselves, convincing themselves that a drug works when it doesn’t. One biotech executive with a great deal of experience in rare diseases told me recently that this issue meant the data “would never fly” with the FDA. The recent failure of a similar, but less effective, drug from Prosensa and GlaxoSmithKline GSK -1.64% made the odds dimmer.
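The intent-to-treat objection is easy to see with invented numbers (these are not Sarepta's data, just a sketch of why excluding the sickest patients flatters a tiny trial):

```python
# Hypothetical 6-minute-walk-test changes (meters) for 12 patients.
# Two patients deteriorated badly; all values are made up to
# illustrate intent-to-treat vs. post-hoc exclusion.
changes = [30, 25, 40, 35, 20, 45, 28, 33, 38, 22, -180, -210]

# Intent-to-treat: everyone who entered the trial counts.
itt_mean = sum(changes) / len(changes)

# Post-hoc exclusion: quietly drop the two worst responders.
kept = [c for c in changes if c > -100]
excluded_mean = sum(kept) / len(kept)

print(f"intent-to-treat mean change: {itt_mean:+.1f} m")
print(f"after excluding two patients: {excluded_mean:+.1f} m")
```

With twelve patients, two exclusions flip the headline result from a decline to a solid improvement, which is precisely why regulators insist on the intent-to-treat analysis.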
And Adam Feuerstein of TheStreet.com, who thought that the company would get the accelerated designation, has a look at the decision here. He spoke with a bearish investor who made this case:
The FDA's issues with trial design are so wide-ranging that it seems like wishful thinking that Sarepta will be able to agree on a study design and start enrolling by the second quarter 2014.
Major questions with dystrophin quantitative assay. Questions with results of anything less than two years. Need for a larger study to power the six-minute walk test (6MWT) data. Possible need to expand study population both high and low and go beyond 6MWT as primary endpoint. The FDA is very deeply skeptical and Sarepta will have a difficult time coming to a study design that the company thinks they can do and that the FDA will be satisfied with.
And any trial seems likely to last 2 years. Seems to me that even if all goes well, approval would get pushed out much more than two years. They're going to spend 9 months arguing over study design and probably won't start enrolling until early 2015. Two-year trial plus filing and approval. Sounds like early 2018 approval at best.
I have to say, this is consistent with worries resulting from the Prosensa trial that the market ignored. But it's actually even more negative than I expected. I thought the FDA would just say, do a larger trial along the same lines. What they're saying is much more confused than that. What is a valid marker, and what do you need to get the data to support it?
It makes you wonder whether the FDA really changed their minds lately or if Sarepta misrepresented (through wishful thinking or worse) what the FDA had been telling them all along.
The agency really did make things much trickier than most people had been expecting. They're talking about completely new endpoints, rather than just shoring up the data collected so far (which was already more than the company's boosters were willing to think about, in some cases). I don't envy the folks at the FDA, but then, I never do. They get to look like heartless bureaucrats, bleating about numbers while children are suffering. The flip side, though, is trying to keep people from raising their hopes for something that does no good. If we approve things that just look as if they might work, all sorts of charlatans will rush in, human nature being what it is.
But we're not talking about approval here, just an accelerated protocol for it. Surely they could have at least agreed to fast-track this one? I think, though, that the FDA saw itself being put into an untenable position. They did not think that there was enough solid evidence to approve the drug as it stood, and accelerated approval would ensure that no more was going to be forthcoming. All that would do would be to get everyone's hopes up even more, for what would still look very much like a rejection based on insufficient data. And in that case, why not tell the company now and get it over with?
+ TrackBacks (0) | Category: Clinical Trials | Regulatory Affairs
November 12, 2013
Well, yesterday Reuters had a preview of GlaxoSmithKline's expected release of Phase III data on their phospholipase A2 inhibitor darapladib:
In theory, darapladib could become a $10 billion-a-year seller, industry analysts believe, making it GSK's biggest-ticket pipeline bet.
In practice, there are major doubts about its prospects, after mixed evidence to date, and current consensus forecasts point to annual sales of only $605 million in 2018, according to Thomson Reuters Pharma.
Barclays analysts see just a 10 percent probability of the drug succeeding, which they say points to a potential 12 percent boost to GSK's valuation if Phase III trial results are positive, with a modest 2 percent downside if it fails.
We'll now see how well that last forecast works out, because the Phase III data are out today, and things don't look good. The compound missed its primary endpoint. GSK picked this one up by acquiring Human Genome Sciences, in a move that doesn't seem quite as slick as it might have once.
But that's hindsight - there's no way to know if a new cardiovascular drug works without recruiting thousands and thousands of people into a huge Phase III trial, and that's just what GSK did here (16,000 patients!). And there's another 13,000-patient trial still waiting to report. How much money has been spent, we'll never know, but it's a lot. Maybe that alone is reason enough to stay out of that therapeutic area - other companies have come to just that conclusion here and there.
Update: in response to some of the comments, I've changed the title and some text of this post. There was one primary endpoint in this trial, but it was a composite: time to first occurrence of any major cardiovascular event: MI (heart attack), stroke, or "cardiovascular death". And it's true that the company is saying that the compound did make some of its secondary endpoints, but until the data are presented, we don't know which ones, nor how important they are. There's no way that this can be anything but bad news, however.
+ TrackBacks (0) | Category: Clinical Trials
Here's the (edited) transcript of an interview that Pfizer's VP of clinical research, Charles Knirsch, gave to PBS's Frontline program. The subject was the rise of resistant bacteria - which is a therapeutic area that Pfizer is no longer active in.
And that's the subject of the interview, or one of its main subjects. I get the impression that the interviewer would very much like to tell a story about how big companies walked away to let people die because they couldn't make enough money off of them:
. . .If you look at the course of a therapeutic to treat pneumonia, OK, … we make something, a macrolide, that does that. It’s now generic, and probably the whole course of therapy could cost $30 or $35. Even when it was a branded antibiotic, it may have been a little bit more than that.
So to cure pneumonia, which in some patient populations, particularly the elderly, has a high mortality, that’s what people are willing to pay for a therapeutic. I think that there are differences across different therapeutic areas, but for some reason, with antibacterials in particular, I think that society doesn’t realize the true value.
And did it become incumbent upon you at some point to make choices about which things would be in your portfolio based on this?
Based on our scientific capabilities and the prudent allocation of capital, we do make these choices across the whole portfolio, not just with antibacterials.
But talk to me about the decision that went into antibacterials. Pfizer made a decision in 2011 and announced the decision. Obviously you were making choices among priorities. You had to answer to your shareholders, as you’ve explained, and you shifted. What went into that decision?
I think that clearly our vaccine platforms are state of the art. Our leadership of the vaccine group are some of the best people in the industry or even across the industry or anywhere really. We believe that we have a higher degree of success in those candidates and programs that we are currently prosecuting.
So it’s a portfolio management decision, and if our vaccine for Clostridium difficile —
Yeah, a bacteria which is a major cause of both morbidity and mortality of patients in hospitals, the type of thing that I would have been consulted on as an infectious disease physician, that in fact we will prevent that, and we’ll have a huge impact on human health in the hospitals.
But did that mean that you had to close down the antibiotic thing to focus on vaccines? Why couldn’t you do both?
Oh, good question. And it’s not a matter of closing down antibiotics. We were having limited success. We had had antibiotics that we would get pretty far along, and a toxicity would emerge either before we even went into human testing or actually in human testing that would lead to discontinuation of those programs. . .
It's that last part that I think is insufficiently appreciated. Several large companies have left the antibiotic field over the years, but several stayed (GlaxoSmithKline and AstraZeneca come to mind). But the ones who stayed were not exactly rewarded for their efforts. Antibacterial drug discovery, even if you pour a lot of money and effort into it, is very painful. And if you're hoping to introduce a new mechanism of action into the field, good luck. It's not impossible, but if it were easy to do, more small companies would have rushed in to do it.
Knirsch doesn't have an enviable task here, because the interviewer pushes him pretty hard. Falling back on the phrase "portfolio management decisions" doesn't help much, though:
In our discussion today, I get the sense that you have to make some very ruthless decisions about where to put the company’s capital, about where to invest, about where to put your emphasis. And there are whole areas where you don’t invest, and I guess the question we’re asking is, do you learn lessons about that? When you pulled out of Gram-negative research like that and shifted to vaccines, do you look back on that and say, “We learned something about this”?
These are not ruthless decisions. These are portfolio decisions about how we can serve medical need in the best way. …We want to stay in the business of providing new therapeutics for the future. Our investors require that of us, I think society wants a Pfizer to be doing what we do in 20 years. We make portfolio management decisions.
But you didn’t stay in this field, right? In Gram negatives you didn’t really stay in that field. You told me you shifted to a new approach.
We were not having scientific success, there was no clear regulatory pathway forward, and the return on any innovation did not appear to be something that would support that program going forward.
Introducing the word "ruthless" was a foul, and I'm glad the whistle was blown. I might have been tempted to ask the interviewer what it meant, ruthless, and see where that discussion went. But someone who gives in to temptations like that probably won't make VP at Pfizer.
+ TrackBacks (0) | Category: Drug Development | Drug Industry History | Infectious Diseases
Nature Biotechnology is making it known that they're open to publishing studies with negative results. The occasion is their publication of this paper, which is an attempt to replicate the results of this work, published last year in Cell Research. The original paper, from Chen-Yu Zhang of Nanjing University, reported that micro-RNAs (miRNAs) from ingested plants could be taken up into the circulation of rodents, and (more specifically) that miRNA168a from rice could actually go on to modulate gene expression in the animals themselves. This was a very interesting (and controversial) result, with a lot of implications for human nutrition and for the use of transgenic crops, and it got a lot of press at the time.
But other researchers in the field were not buying these results, and this new paper (from miRagen Therapeutics and Monsanto) reports that they cannot replicate the Nanjing work at all. Here's their rationale for doing the repeat:
The naturally occurring RNA interference (RNAi) response has been extensively reported after feeding double-stranded RNA (dsRNA) in some invertebrates, such as the model organism Caenorhabditis elegans and some agricultural pests (e.g., corn rootworm and cotton bollworm). Yet, despite responsiveness to ingested dsRNA, a recent survey revealed substantial variation in sensitivity to dsRNA in other Caenorhabditis nematodes and other invertebrate species. In addition, despite major efforts in academic and pharmaceutical laboratories to activate the RNA silencing pathway in response to ingested RNA, the phenomenon had not been reported in mammals until a recent publication by Zhang et al. in Cell Research. This report described the uptake of plant-derived microRNAs (miRNA) into the serum, liver and a few other tissues in mice following consumption of rice, as well as apparent gene regulatory activity in the liver. The observation provided a potentially groundbreaking new possibility that RNA-based therapies could be delivered to mammals through oral administration and at the same time opened a discussion on the evolutionary impact of environmental dietary nucleic acid effects across broad phylogenies. A recently reported survey of a large number of animal small RNA datasets from public sources has not revealed evidence for any major plant-derived miRNA accumulation in animal samples. Given the number of questions evoked by these analyses, the limited success with oral RNA delivery for pharmaceutical development, the history of safe consumption for dietary small RNAs and lack of evidence for uptake of plant-derived dietary small RNAs, we felt further evaluation of miRNA uptake and the potential for cross-kingdom gene regulation in animals was warranted to assess the prevalence, impact and robustness of the phenomenon.
They believe that the expression changes that the original team noted in their rodents were due to the dietary changes, not to the presence of rice miRNAs, which they say that they cannot detect. Now, at this point, I'm going to exit the particulars of this debate. I can imagine that there will be a lot of hand-waving and finger-pointing, not least because these latest results come partly from Monsanto. You have only to mention that company's name to an anti-GMO activist, in my experience, to induce a shouting fit, and it's a real puzzle why saying "DeKalb" or "Pioneer Hi-Bred" doesn't do the same. But it's Monsanto who take the heat. Still, here we have a scientific challenge, which can presumably be answered by scientific means: does rice miRNA get into the circulation and have an effect, or not?
What I wanted to highlight, though, is another question that might have occurred to anyone reading the above. Why isn't this new paper in Cell Research, if they published the original one? Well, the authors apparently tried them, only to find their work rejected because (as they were told) "it is a bit hard to publish a paper of which the results are largely negative". That is a silly response, verging on the stupid. The essence of science is reproducibility, and if some potentially important result can't be replicated, then people need to know about it. The original paper had very big implications, and so does this one.
Note that although Cell Research is published out of Shanghai, it's part of the Nature group of journals. If two titles under the same publisher can't work something like this out, what hope is there for the rest of the literature? Congratulations to Nature Biotechnology, though, for being willing to publish, and for explicitly stating that they are open to replication studies of important work. Someone should be.
+ TrackBacks (0) | Category: Biological News | The Scientific Literature