About this Author
Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases.
To contact Derek email him directly: email@example.com
In the Pipeline:
Don't miss Derek Lowe's excellent commentary on drug discovery and the pharma industry in general at In the Pipeline
July 2, 2015
Oh, man. Here's another example of an old, sad story - just a little fakery at the beginning, and here's what it leads to:
Government prosecutors said (Dong-Pyou) Han's misconduct dates to 2008 when he worked at Case Western Reserve University in Cleveland under professor Michael Cho, who was leading a team testing an experimental HIV vaccine on rabbits. Cho's team began receiving NIH funding, and he soon reported the vaccine was causing rabbits to develop antibodies to HIV, which was considered a major breakthrough. Han said he initially accidentally mixed human blood with rabbit blood making the potential vaccine appear to increase an immune defense against HIV, the virus that can cause AIDS. Han continued to spike the results to avoid disappointing Cho, his mentor, after the scientific community became excited that the team could be on the verge of a vaccine.
He's now been sentenced to 4 1/2 years in prison for faking research reports, and ordered to repay the NIH $7.2 million in misused grant money. This was an extensive program of faked results (see this post at Retraction Watch from 2013, when the Office of Research Integrity made its report on the case). This went on for years, with the results - presented at multiple conferences in the field - being the basis for an entire large research program.
How someone ends up in this position, that's what you wonder. But it's a classic mistake. Fred Schwed, in Where Are the Customers' Yachts?, laid out the equivalent situation in investing. I don't have the exact quote to hand, but it was something like "They got on the train at Grand Central Station - they were just going uptown to visit Grandma. But the next thing they knew, they were making 80 miles an hour, at midnight, through Terre Haute, Indiana". In a more somber key, Macbeth experiences the same feeling in Act 3, scene 4: "I am in blood stepped in so far that, should I wade no more, returning were as tedious as go o'er." It's such an old trap that you'd think that people would be looking out for it more alertly, but I suppose that the people who fall into it never think that it'll happen to them. . .
Category: Infectious Diseases | The Dark Side
In case you were wondering, you can add "MAO-B inhibition" to the long, long list of Things That Don't Do Any Good For Alzheimer's. I'm not sure how much hope anyone had for that program (at either Roche or Evotec), but the potential payoff is so huge that a lot of marginal ideas get tried. At least this was in Phase II, and not Phase III; there's always that. . .
Category: Alzheimer's Disease
Chris Viehbacher, ex-Sanofi, has reappeared at a $2 billion biotech fund.
Viehbacher is clear, though, that Gurnet will be founding companies as well as looking outside the red-hot fields like oncology. To find value these days, you have to look outside of the trendiest fields, he says. And you're also not going to find much in the way of innovation at huge companies like Sanofi.
"My conclusion is that you can't have truly disruptive thinking inside big organizations," says Viehbacher. "Everything about the way a big organization is designed is about eliminating disruption."
In Viehbacher's view, Big Pharma is still trying to act in the way the old movie studios once operated in Hollywood, with everyone from the stars to writers and stunt men all roped into one big group. Today, he says, movie studios move from project to project, and virtually everyone is a freelancer. In biopharma, he adds, value is found in specializing, and "fixed costs are your enemy."
He's right about that disruption problem at big companies, although he raised eyebrows when he said something similar while still employed at a big company. (Sanofi tried to put those comments in the ever-present "broader context" here). A large organization has its own momentum, but even if its magnitude is decent, its vector is pointed in the direction of keeping things the way that they are now. To be sure, that requires finding new drugs - it's a bit of a Red Queen's race in this business - but a lot of people would be fine if things just sort of rolled along without too many surprises or changes.
If that was ever a good fit for this industry, it isn't now. That makes it nerve-wracking to work in it, for sure, because if you feel that your job is really, truly safe then you're wrong. There are too many unpredictable events for that. I was involved in an interesting conversation the other day about investors in biopharma (and how passionately irrational some of the smaller ones can be), and we agreed that one reason for this is the large number of binary events: the clinical trial worked, or it didn't. The FDA approved your drug, or it didn't. You made your expected sales figures, or you didn't. And those are the expected ones, with dates on the calendar. There are plenty of what's-that-breaking-out-of-the-cloud-cover events, too. Trial stopped for efficacy! Trial stopped for tox! Early approval! Drug pulled from the market! It's like playing a board game with piles of real money (and with your career).
So Viehbacher's right on that point. But I part company with him on his earlier comments (basically, that if he was going to get anything innovative done at Sanofi, he was going to have to go outside, because no one who wanted to innovate was working at a company like that in the first place). Even large companies have good people working at them - believe it or not! And some of them even have good ideas, too. But it can be harder for them to make headway in a large organization - he's right about that.
Category: Business and Markets | Who Discovers and Why
July 1, 2015
Longtime readers might recall that every so often I hit on the topic of the "dark matter" of drug target space. We have a lot of agents that hit G-protein-coupled receptors, and plenty that inhibit enzymes. Those, though, are all small-molecule binding sites, optimized by evolution to hold on to molecules roughly the size that we like to make. When you start targeting other protein surfaces (protein-protein interactions) you're heading into the realm where small molecules are not the natural mediators, and things get more difficult.
But all of those are still proteins, and there are many other types of biomolecules. What about protein/nucleic acid interactions? Protein/carbohydrate interactions? Protein-lipid targets? Those are areas where we've barely even turned on the lights in drug discovery, and past them, you'd have to wonder about carbohydrate/carbohydrate systems and the like, where no proteins are involved at all. None of these are going to be straightforward, but there's a lot to be discovered.
I'm very happy to report on this new paper from the Cravatt group at Scripps, which makes a foray into just this area. A few years ago, the group reported a series of inhibitors of monoacylglycerol lipase, as part of their chemical biology efforts on characterizing hydrolases. That seems to have led to an interest in lipid interactions in general, and this latest work is the culmination (so far) of that research path. It uses classic chemical-biology probes that mimic arachidonyl lipids and several other classes (oleoyl, palmitoyl, etc.). Exposing these to cell proteomes in labeling experiments shows hundreds and hundreds of interactions taking place, the great majority of which we have had no clue about at all. The protein targets were identified by stable-isotope labeling mass spec (comparing experiments in "light" cells versus "heavy" ones carrying the labels), and over a thousand proteins were pulled in with just the two kinds of arachidonyl probes they used (with some overlap between them, but some unique proteins to each sort of probe - you have to try these kinds of things from multiple directions to make sure you're seeing as much as possible).
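The hit-merging logic in that kind of experiment is simple to picture in code. Here's a minimal sketch (the protein IDs and enrichment ratios below are made up for illustration, not data from the paper) of how hits from two probes get filtered by their heavy/light ratios and then compared:

```python
# Illustrative sketch of quantitative chemoproteomics bookkeeping:
# each probe experiment yields heavy/light isotope ratios per protein,
# hits are filtered by an enrichment cutoff, and the two probes' hit
# sets are merged. All names and numbers here are hypothetical.

def enriched_hits(ratios, cutoff=5.0):
    """Keep proteins whose heavy/light ratio passes the enrichment cutoff."""
    return {protein for protein, ratio in ratios.items() if ratio >= cutoff}

# Hypothetical heavy/light ratios for two different arachidonyl probes
probe_a = {"NUCB1": 20.1, "PTGR2": 8.4, "ZADH2": 6.2, "ACTB": 1.1}
probe_b = {"NUCB1": 15.7, "VAT1": 9.9, "ACTB": 0.9}

hits_a = enriched_hits(probe_a)
hits_b = enriched_hits(probe_b)

shared = hits_a & hits_b       # seen with both probes
unique_a = hits_a - hits_b     # only with probe A
unique_b = hits_b - hits_a     # only with probe B
all_targets = hits_a | hits_b  # the union: why you run multiple probes
```

The union at the end is the point of the exercise: each probe misses targets the other one catches, which is why you have to come at these experiments from multiple directions.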
As well as including many proteins whose functions are unknown, these lists were substantially enriched in proteins that are already drug targets. That should be enough to make everyone in the drug discovery business take a look, but if you're looking for more, try out the next part. The team went on to do the same sort of lipid interaction profiling after treatment of the cells with a range of inhibitors for enzymes involved in such pathways, and found a whole list of cross-reacting targets for these drugs that were unknown until now.
They then turned their attention to one of the proteins that was very prominent in the arachidonyl profiling experiments, NUCB1 (function unknown, but apparently playing a major role in lipid processing and signaling). Taking the arachidonyl probe structure and modifying it to make a fluorescent ligand led to a screening method for NUCB1 inhibitors. 16,000 commercial compounds were tested, and the best hit from this led to a series of indole derivatives. These were taken back around in further labeling experiments to determine the actual site of binding on NUCB1, and they seem to have narrowed it down (as well as gotten a start on the specific binding sites of many of the other protein targets they've discovered). There are also profiles of cellular changes induced by treatment with these new NUCB1 inhibitors, along with hypotheses about just what its real function is.
Holy cow, is this ever a good paper. I've just been skimming over the details; there's a lot more to see. I strongly recommend that everyone interested in new drug targets read it closely - you can feel a whole landscape opening up in front of you (thus the title of this post). This is wonderful work, exactly the kind of thing that chemical biology is supposed to illuminate.
Category: Chemical Biology | Drug Assays
June 30, 2015
When you look at the stock charts of the major pharma companies, there's not a lot of excitement to be had. Until you get to Eli Lilly, that is. Over the last year, the S&P 500 is up about 5%, and most of the big drug stocks are actually negative (Merck -0.4%, Sanofi down 6%, J&J down 7%, AstraZeneca down 13%). Pfizer pulled away from the index in February, and has held on to that gain (up 13% from a year ago), but Lilly - those guys were doing about as well as Pfizer until the last month or two, but have just ratcheted up since then, for a 1-year gain of over 32%. Why them?
It's all Alzheimer's speculation, as this Bloomberg piece goes into. And as has been apparent recently, Alzheimer's is getting a lot of speculation these days. Biogen really revved things up with their own early-stage data a few months back, and since then, if you've got an Alzheimer's program - apparently, any Alzheimer's program whatsoever - you're worth throwing money at. Lilly, of course, has been (to their credit) pounding away at the disease for many years now, expensively and to little avail. One of their compounds (a gamma-secretase inhibitor) actually made the condition slightly worse in the treatment group (more here), while their beta-secretase inhibitor failed in the usual way. But they've also been major players in the antibody field. Their solanezumab was not impressive in the clinic, except possibly in the subgroup of early-stage patients, and Lilly (showing a great deal of resolve, and arguably some foolhardiness) has been running another Phase III trial in that population.
They also extended the existing trial in that patient group, and are due to report data on that effort very soon - thus the run-up in the company's stock. This is going to be very interesting, for sure - it would be great for Alzheimer's patients (and for Lilly) if the results are clearly positive, but that (sad to say) is the least likely outcome. (I'm not just being gloomy for the sake of being gloomy - Alzheimer's antibodies have had a very hard time showing efficacy under any circumstances, and the all-mechanisms clinical success rate against the disease is basically zero). The same goes, of course, for the new Phase III trial itself. Things could well come out clearly negative, with the possible good results from the earlier trial evaporating the way subgroup analyses tend to when you lean on them. Or - and this is the result I fear the most - there could be wispy sorta-kinda hints of efficacy, in some people, to some degree. Pretty much like the last trial, after which Lilly began beating the PR drums to make things look not so bad.
The reason I think that this would be the worst result is that there is so much demand for something, for anything that might help in Alzheimer's that there would be a lot of pressure on the FDA to approve Lilly's drug, even if it still hasn't proven to do much. And this latest trial really is its best chance. It's in exactly the population (the only population) that showed any possible efficacy last time, so if the numbers still come out all vague and shimmery under these conditions, that's a failure, as far as I can see. No one wants to be in the position of explaining statistics and clinical trial design to a bunch of desperate families who may be convinced that a real Alzheimer's drug is being held up by a bunch of penny-pinching data-chopping bureaucrats.
And this brings us to TauRx. I still get mail about them, seven years after they made big news with a methylene-blue-based Alzheimer's therapy program. When last heard from, they were in Phase III, with some unusual funding, but there were no scientific results from them for a while. The company, though, has published several papers recently (many available on their web site), talking about their program.
Here's a paper on their Phase II results. It's a bit confusing. Their 138 mg/day dose was the most effective; the higher dose was complicated by PK problems (see below). When you look at the clinical markers, it appears that the "mild" Alzheimer's patients were hardly affected at all (although the SPECT imaging results did show a significant difference on treatment). The "moderate" Alzheimer's treatment group, though, showed several differences in various cognitive decline scores at the 138 mg/day dose, but no difference in SPECT at all. Another paper, from JBC, talks about compound activity in various cell models of tau aggregation. And this one, from JPET, is their explanation for the PK trouble. It appears that the redox state of the methylene blue core has a big effect on dosing in vivo. There are problems with dissolution, absorption (particularly in the presence of food), and uptake of the compound in the oxidized (methylene blue) state (which they abbreviate as MTC, methylthioninium chloride), but these can be circumvented with a stable dosage form of the reduced leuco compound (abbreviated as LMTX). There's apparently a pH-dependent redox step going on in gastric fluid, so things have to be formulated carefully.
One of the other things that showed up in all this work was a dose-dependent hematological effect, apparently based on methylene blue's ability to oxidize hemoglobin. It's not known (at least in these publications) whether dosing the reduced form helps out with this, but it's potentially a dose-limiting toxicity. So here's the current state of the art:
Although we have demonstrated that MTC has potential therapeutic utility at the minimum effective dose, it is clear that MTC has significant limitations relative to LMTX, which make it an inferior candidate for further clinical development. MTC is poorly tolerated in the absence of food and is subject to dose-dependent absorption interference when administered with food. Eliminating the inadvertent delayed-release property of the MTC capsules did not protect against food interference. Therefore, as found in the phase 2 study, MTC cannot be used to explore the potential benefit of higher doses of MT. Nevertheless, the delayed-release property of the MTC capsules permitted the surprising discovery that it is possible to partially dissociate the cognitive and hematologic effects of the MT moiety. Whether the use of LMTX avoids or reduces the undesirable hematologic effects remains to be determined. . .
The Phase III trials are ongoing with the reduced form, and will clearly be a real finger-crossing exercise, both for efficacy and tox. I wish TauRx luck, though, as I wish everyone in the AD field good luck. None of us, you know, are getting any younger.
Category: Alzheimer's Disease | Clinical Trials | Drug Assays | Pharmacokinetics | Toxicology
June 29, 2015
A reader sent along this link to an article at the New York Review of Books on the relentless emphasis on STEM jobs. The viewpoint of its author, Andrew Hacker, was preordained: he's a political scientist who started a controversy about ten years ago with an editorial wondering if mathematical education (we're talking up to the level of algebra) is even necessary or desirable. So he's not going to be a big booster of any push into science or engineering.
But keeping those biases in mind, he does take a useful tour through what I see as the error at the other end of the spectrum. I'm not ready to say (along with Hacker) that gosh, hardly anyone needs algebra, let alone anything more advanced. But I'm also not ready to say that we've got a terrible shortage of anyone who does know such things. That link quotes Michael Teitelbaum, and the NYRB article is partly a review of his Falling Behind, which is a book-length attempt to demolish the whole "STEM shortage" idea. He also notes another book:
James Bach and Robert Werner’s How to Secure Your H-1B Visa is written for both employers and the workers they hire. They are told that firms must “promise to pay any H-1B employee a competitive salary,” which in theory means what’s being offered “to others with similar experience and qualifications.” At least, this is what the law says. But then there are figures compiled by Zoe Lofgren, who represents much of Silicon Valley in Congress, showing that H-1B workers average 57 percent of the salaries paid to Americans with comparable credentials.
Norman Matloff, a computer scientist at the University of California’s Davis campus, provides some answers. The foreigners granted visas, he found, are typically single or unattached men, usually in their late twenties, who contract for six-year stints, knowing they will work long hours and live in cramped spaces. Being tied to their sponsoring firm, Matloff adds, they “dare not switch to another employer” and are thus “essentially immobile.” For their part, Bach and Werner warn, “it may be risky for you to give notice to your current employer.” Indeed, the perils include deportation if you can’t quickly find another guarantor.
Here's Matloff's page on the subject, and his conclusions seem (to me) to ring unfortunately true. I can't come up with any other way to square the statements and actions of (to pick one example) John Lechleiter, CEO of Eli Lilly. So I'm in an uncomfortable position on this issue: I am pro free-trade, and philosophically I'm pro-immigration (especially the immigration of the sorts of talented, hard-working people that all these US companies want to bring in). That philosophical leaning of mine, though, is predicated on these people being able to pitch in to a growing economy, but not if they're just being used as a means to dump existing workers in favor of cheaper (and more disposable) replacements. And I hate sounding like a nativist anti-immigration yahoo, and I similarly hate sounding (at another end of the political spectrum) like some kind of black-bandanna-wearing anti-corporate agitator. (As mentioned above, I'm also not happy about finding myself in some agreement with some guy whose other positions include the idea that algebra should be dumped from schools as a useless burden). I look around, and wonder how I ended up here. Strange times.
Category: Business and Markets
Bruce Booth has a long post on external R&D in biopharma. He's mostly talking about some of the newer ways to do that, rather than traditional deals and outsourcing. These include larger companies partnering with VC firms to launch smaller ones, large investments in the smaller players with specific rights to buy some of the successes, etc. But the larger players have to be able to keep their hands off:
That said, a number of large companies have also been attempting to do this on their own, without venture involvement at least initially; as far as I can tell, these have had limited “success” to date. GSK’s experiment with Tempero Pharmaceuticals is a good example: founded around great science, the idea was to create a standalone biotech with its own governance that GSK could leverage for Th17 projects in the future. Unfortunately, although the research programs advanced, the company appears to have been unable to escape the gravitational pull of the GSK organization – accessing internal research infrastructure led to conformity, financial costs were all consolidated leading to compliance and internalization, and its employees were eventually just integrated back into GSK.
Then you have the corporate-backed venture capital operations that many companies have set up. People are arguing about the direct benefits that these investment groups provide, but there's little doubt that they help keep the whole ecosystem of small company formation going, and that's definitely worthwhile.
The various precompetitive consortia are another aspect. I've wondered how some of these are going, myself - there's not a lot of hard information yet for some of them. And finally, there are the attempts by several companies to set up their own "skunk works" type groups, apart from the main organization. To my eye, these have even more risk of being swallowed back up by the main company's organizational style and attitude than those officially launched companies (like the GSK example above). It's not just the drug industry - plenty of other sectors have seen attempts at "With us but not of us" branches (e.g., Saturn and GM), and it's very hard to do.
Bruce is looking on the bright side, though:
By bringing high doses of innovative creativity from the “periphery” – via the above-mentioned biotech experiments enabled by external innovation – a leadership team can inoculate their R&D organization’s culture with different strains of thinking, different intellectual antigens to prime new ways of doing things. Simple strategic proximity and openness can afford real opportunities for this interaction if done at significant scale, where the “periphery” achieves a meaningful mindshare (and budgetary support) of the organization.
The "significant scale" part is a key, I'd say. I think that many of the failures in these approaches have been when a company wants to do something different, but not, you know, really all that different. Just different enough for Wall Street to like them again, or different enough so that you can go to the CEO (or the board) and tell them how innovative you've been in shaking things up. But if you're not unnerved and excited, wondering what's going to happen next, maybe hopeful and maybe somewhat scared, then things haven't been shaken up. Those emotions, and the mental attitudes that go along with them, are part of the small-company secret sauce that you're trying to get ahold of. Without them, you haven't accomplished what you set out to do - but not everyone really wants them as much once they've started to experience them.
In order to capture the tangible and intangible value from external R&D models, organizations have to overcome a set of established, pervasive, and frequently corrosive mental models that prevent successful engagement in the ecosystem. These are challenging to unwind and impair many organizations today. . .
. . .“Protecting our interests”. This is one of the most pernicious of mental models that renders many Pharma groups incapable of creative external R&D, and is based in the paranoia that everyone is out to screw you. Lawyers are paid to be conservative, think about every scenario, extract every protection possible, and create piles of paperwork. I’m convinced that Pharma’s corporate deal lawyers suffocate more creative deals than they are able close – they are the ultimate “Deal Prevention Officer” inside of many companies.
The post goes on to list several more of these - go over and have a look, and if you work at (or have worked at) a large company, you'll recognize them. Getting around or over these, as Bruce says, is essential. But no one quite has a defined set of steps for doing that (despite many consultants who will sell you just such a list). In the worst organization, that paranoia mentioned above, that everyone is out to screw you, has infected the employees in their dealings with their own upper management. And if things have progressed that far, you're going to have trouble reinvigorating the R&D by any means whatsoever.
But for organizations that can make the leap, the sorts of models described in the post are definitely worth a look. It's too early, in most of the cases, to say how the returns are on them, but it's a good sign that several companies have taken serious attempts at doing things differently.
Category: Business and Markets
June 26, 2015
Here's a good overview of phenotypic screening from a group at Pfizer in Science Translational Medicine. It emphasizes, as it should, that this is very much a "measure twice, cut once" field - a bad phenotypic screen is the worst of both worlds:
The karyotype of a cell represents one of its most fundamental and defining characteristics. A large number of tumor-derived cell lines display substantial genetic abnormalities, with some extreme examples bearing in excess of 100 chromosomes as opposed to the expected 46. By that measure, the widely used human monocytic THP-1 cell line would fare well considering its overall diploid character. Nonetheless, triploidy is observed for four chromosomes and monoploidy for another, along with the entire deletion of chromosome X and substantial chromosomal rearrangements. A simple question pertains: Is this a monocyte? In other words, can we expect a faithful representation of all of the functions of a primary human monocyte from such a cell?
Using primary tissue from human patients has its own problems - availability, variation from batch to batch, limited useful lifetime in culture - but those are (in most cases) worth living with compared to the limitations of too-artificial cell lines. The authors also emphasize care in picking what ways you'll stress or stimulate the cells to mimic a disease state, and making sure that the assay readouts are as closely matched as possible to clinical end points.
The track record of gene expression readouts such as reporter gene assays is lackluster with respect to phenotypic drug discovery; no recent (>1998), first in class, small-molecule drug has originated from such an assay. A potential explanation is that mechanisms influencing gene expression represent only a fraction of all mechanisms affecting a given phenotype. . .An in-house study aimed at discovering previously unknown mechanisms leading to the up-regulation of apolipoprotein E (ApoE) secretion compared confirmed hits obtained in the same cellular system using reporter gene and enzyme-linked immunosorbent assay readouts. Although the reporter gene assay successfully identified compounds that provide large increases in ApoE secretion, it missed half of the overall hit set. . .
None of these recommendations are easy, and (from an impatient perspective) all they're doing is slowing down the implementation of your screen. Detail after detail, doubt after doubt! But your screening idea needs to be able to stand up to these, and if you just plunge ahead, you run a serious risk of generating a large amount of complicated, detailed, irrelevant data. The worst kind, in other words.
Every drug program, and every screen, rests on a scaffolding of assumptions. You'd better be clear on what they are, and be ready to justify them. In a target-directed screen, a big one is "We know that this is a key part of the disease mechanism", and (as the number of failures in Phase II shows us), that's not true anywhere near as often as we'd like. Phenotypic screening dodges that one, a big point in its favor, but replaces it with another big leap of faith: "We know that this assay recapitulates the human disease". You pays your money, and you takes your choice.
Category: Drug Assays
I truly enjoyed this look at Dr. Robert David Perlmutter of "Grain Brain" fame, another branch of the same intellectual family tree as Drs. Mercola and Oz. Wonderful cures! Suppressed by evil forces! Under our noses all along! Exactly the opposite of the wonderful cures claimed by the same guy in the 1990s. . .uh, what? Fun stuff. But it won't convince the true believers; nothing will.
Category: Snake Oil
June 25, 2015
I've heard from sources this morning that the folks at Bristol-Myers Squibb in Wallingford have received, out of the blue, one of those sudden sitewide meeting announcements that often portend big news. I'll leave the comments section of this post for updates from anyone with more info - I'll be out of communication for a while this morning at the ChemDraw event.
Update: OK, the press release has just come out. The company is going to open up a big new site in Cambridge, and here's the key part:
In Cambridge, Bristol-Myers Squibb scientists will focus on the company’s ongoing discovery efforts in genetically defined diseases, molecular discovery technologies and discovery platform chemistry in state-of-the-art lab space. In addition to relocating up to 200 employees from its Wallingford, Conn. and Waltham, Mass. sites, and a limited number from its central New Jersey locations, the company expects to recruit scientists from the Cambridge area. As part of this transition, the Waltham site is expected to close in early 2018. The existing site in Wallingford will also close in early 2018 with up to 500 employees relocating to a new location in Connecticut.
Category: Business and Markets
You may recall the report of the synthetic analgesic tramadol as a natural product from Cameroon, and the subsequent report that it was nothing of the kind. (That's the paper that brought the surprising news that local farmers were feeding the drug to their cows). Now the first group (a team from Nantes, Lodz, and Grenoble) is back with a rebuttal.
They note that previous report, but also say that tramadol has been isolated from samples in a bioreserve, where human cattle grazing is prohibited. The rest of the paper goes on to analyze isolated tramadol samples by NMR, looking for variations in the 13C levels to try to come up with a biosynthetic pathway. Isotopic distribution is the way to do that, for sure - the various synthetic steps used to make a compound (and its precursors) can be subject to kinetic isotope effects, and over time, these can build up to recognizable signatures. An example of this is the identification of endogenous human testosterone versus the plant-derived material found in supplements.
The authors go over how the various structural features found in tramadol have also been noted in other natural products, and propose some biosynthetic pathways based on these and on the observed 13C ratios (which they report do vary from synthetic samples). Probably the strongest evidence is from the methyl groups, which show evidence of having been delivered by something like S-adenosylmethionine. Overall oxygen isotope ratios are also apparently quite different than commercial samples.
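For readers who haven't run into it, the bookkeeping behind these comparisons is the standard delta notation: a sample's 13C/12C ratio is expressed in parts per thousand relative to the VPDB reference standard. A minimal sketch (the VPDB ratio is the conventional reference value; the sample ratios below are made up for illustration, not numbers from the paper):

```python
# Delta notation for compound-specific isotope analysis.
# R_VPDB is the conventional 13C/12C ratio of the Vienna Pee Dee
# Belemnite standard; the sample ratios are hypothetical.

R_VPDB = 0.0112372  # 13C/12C of the VPDB standard

def delta_13c(r_sample):
    """delta-13C in per mil (parts per thousand) relative to VPDB."""
    return (r_sample / R_VPDB - 1.0) * 1000.0

# Hypothetical ratios: plant-derived material is typically depleted
# in 13C relative to the standard; a synthetic sample may differ.
delta_plant = delta_13c(0.0109300)      # roughly -27 per mil
delta_synthetic = delta_13c(0.0111200)  # roughly -10 per mil
```

Small kinetic isotope effects at each enzymatic (or synthetic) step shift these delta values, and it's the accumulated pattern across positions in the molecule that serves as the fingerprint.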
So the battle is joined! The confounding factors I can think of, off the top of my head, are possible differences in the synthetic routes (and thus isotope ratios) of the commercial material used here (from Sigma-Aldrich) and the material available in Cameroon. But then, the authors state here that their samples were obtained from a part of the nature reserve where people are not farming cattle. None of us are exactly in a position to judge that - I'm not going to the boonies of Cameroon to find out - but if they're right about that, it's also a good argument in their favor.
But the only way to really resolve this is to grow some African peach trees, feed them labeled precursors, and see if strongly labeled tramadol comes out the other end. This paper says that such an experiment is "not currently feasible", but I have to wonder if there's an arboretum somewhere that has such trees in it (and if such trees produce tramadol already). There will surely be another chapter to this story - or two, or three.
Category: Analytical Chemistry | Natural Products
June 24, 2015
If you have a chance to stop by, Thursday the 25th is the "30th Anniversary of ChemDraw" event in Cambridge (MA). Here's the link - I'm going to reminisce a bit in the morning's program about the pre- and early post-ChemDraw days (as I have here on occasion). If you'd told me about this event back in 1985, I don't think I would have believed you.
Update: Prof. Dave Evans will be on hand to talk about the early days - here's his memoir of that period, in Angewandte Chemie.
Category: General Scientific News
Here's another Big Retrospective Review of drug pipeline attrition. This sort of effort goes back to the now-famous Rule-of-Five work, and readers will recall the Pfizer roundup of a few years back, followed by an AstraZeneca one (which didn't always recapitulate the Pfizer pfindings, either). This latest is a joint effort to look at the 2000-2010 pipeline performance of Pfizer, AstraZeneca, Lilly, and GSK all at the same time (using common physical descriptors provided to a third party, Thomson Reuters, to deal with the proprietary nature of the compounds involved). The authors explicitly state they've taken on board the criticisms of these papers that have been advanced in the past, so this one is meant to be the current state of the art in the area.
What does the state of the art have to teach us? 812 compounds are in the data set, with their properties, current status, and reasons for failure (if they have indeed failed, and believe me, those four companies did not put eight hundred compounds on the market in that ten-year period). The authors note that there still aren't enough Phase III compounds to draw as many conclusions as they'd like: 808 had a highest phase described, 422 of those were still preclinical, 231 were in Phase I, 145 in Phase II, 8 were in Phase III and 2 in Phase IV/postmarketing studies. These are, as the authors note, not quite representative figures, compared to industry-wide statistics, and reflect some compounds (including several that went to market) that the participants clearly have left out of their data sets. Considering the importance of the (relatively few) compounds in the late stages, this is enough to make a person wonder about how well conclusions from the remaining data set hold up, but at least something can be said about earlier attrition rates (where that effect is diluted).
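To make the shape of that data set concrete, here's the quoted phase breakdown tallied into percentages (a quick Python sketch; the counts are exactly the ones given above, nothing more):

```python
# Highest development phase reached, for the 808 compounds with a phase listed
phase_counts = {
    "Preclinical": 422,
    "Phase I": 231,
    "Phase II": 145,
    "Phase III": 8,
    "Phase IV/postmarketing": 2,
}

total = sum(phase_counts.values())
assert total == 808  # the quoted figures do add up

for phase, n in phase_counts.items():
    print(f"{phase}: {n} ({100 * n / total:.1f}%)")
```

Over half the set never left preclinical, and only about one percent made Phase III or beyond, which is why the late-stage conclusions rest on such thin numbers.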
605 of the compounds in the set were listed as terminated projects, and 40% of those were chalked up to preclinical tox problems. Second highest, at 20% was (and I quote) "rationalization of company portfolios". I divide that category, myself, into two subcategories: "We had to save money, and threw this overboard" and "We realized that we never should have been doing this at all". The two are not mutually exclusive. As the paper puts it:
. . .these results imply that substantial resources are invested in research and development across the industry into compounds that are ultimately simply not desired or cannot be progressed for other reasons (for example, agreed divestiture as part of a merger or acquisition). In addition, these results suggest that frequent strategy changes are a significant contributor to lack of research and development success.
You think? Maybe putting some numbers on this will hammer the point home to some of the remaining people who need to understand it. One can always hope. At any rate, when you analyze the compounds by their physicochemical properties, you find that pretty much all of them are within the accepted ranges. In other words, the lessons of all those earlier papers have been taken on board (and in many cases, were part of med-chem practice even before all the publications). It's very hard to draw any conclusions about progression versus physical properties from this data set, because the physical properties just don't vary all that much. The authors make a try at it, but admit that the error bars overlap, which means that I'm not even going to bother.
What if you take the set of compounds that were explicitly marked down as failing due to tox, and compare those to the others? No differences in molecular weight, no differences in cLogP, no differences in cLogD, and no differences in polar surface area. I mean no differences, really - it's just solid overlap across the board. The authors are clearly uncomfortable with that conclusion, saying that ". . .these results appear inconsistent with previous publications linking these parameters with promiscuity and with in vivo toxicological outcomes. . .", but I wonder if that's because those previous publications were wrong. (And I note that one such previous publication has already come to conclusions like these). Looking at compounds that failed in Phase I due to explicit PK reasons showed no differences at all in these parameters. Comparing compounds that made it only to Phase I (and failed for any reason) versus the ones that made it to Phase II or beyond showed, just barely, a significant effect for cLogP, but no significant effect for cLogD, molecular weight, or PSA. And even that needs to be interpreted with caution:
. . .it is not sufficiently discriminatory to suggest that further control of lipophilicity would have a significant impact on success. Examination of how the probabilities of observing clinical safety failures change with calculated logP and calculated logD7.4 by logistic regression showed that there is no useful difference over the relevant ranges. . .
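To see why that's so, here's a toy illustration in Python (simulated numbers, not the paper's proprietary data) of how thoroughly two cLogP distributions can overlap even when their means differ slightly; the resulting effect size is far too small to discriminate failed from progressed compounds:

```python
import random
import statistics

random.seed(42)
# Simulated cLogP values: both groups drawn from nearly identical distributions,
# mimicking the solid overlap reported for the real (proprietary) data set
failed     = [random.gauss(3.1, 1.2) for _ in range(400)]
progressed = [random.gauss(2.9, 1.2) for _ in range(200)]

mean_f, mean_p = statistics.mean(failed), statistics.mean(progressed)
pooled_sd = statistics.pstdev(failed + progressed)
effect_size = abs(mean_f - mean_p) / pooled_sd  # roughly Cohen's d

print(f"mean cLogP (failed)     = {mean_f:.2f}")
print(f"mean cLogP (progressed) = {mean_p:.2f}")
print(f"effect size ~ {effect_size:.2f}")  # well below the ~0.8 of a 'large' effect
```

With distributions this entangled, no cutoff on the property will sort one group from the other, which is the authors' point about logistic regression showing "no useful difference over the relevant ranges".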
So, folks, if your compounds mostly fit within the envelope to start with (as these 812 did), you're not doing yourself any good by tweaking physicochemical parameters any more. To me, it looks like the gains from that approach were realized early on, by trimming the fringe compounds in each category, and there's not much left to be done. Those PowerPoint slides you have for the ongoing project, showing that you've moved a bit closer to the accepted middle ground of parameter space, and are therefore making progress? Waste of time. I mean that literally - a waste of time and effort, because the evidence is now in that things just don't work that way. I'll let the authors sum that up in their own words:
It was hoped that this substantially larger and more diverse data set (compared with previous studies of this type) could be used to identify meaningful correlations between physicochemical properties and compound attrition, particularly toxicity-based attrition. . .However, beyond reinforcing the already established general trends concerning factors such as lipophilicity (and that none too strongly - DBL), this did not prove generally to be the case.
Nope, as the data set gets larger and better curated, these conclusions start to disappear. That, to be sure, is (as mentioned above) partly because the more recent data sets tend to be made up of compounds that are already mostly within accepted ranges for these things, but we didn't need umpteen years of upheaval to tell us that compounds weighing 910 with logP values of 8 are less likely to be successful. Did we? Too many organizations made the understandable human mistake of thinking that changing drug candidate properties was some sort of sliding scale, that the more you moved toward the good parts, the better things got. Not so.
What comes out of this paper, then, is a realization that watching cLogP and PSA values can only take you so far, and that we've already squeezed everything out of such simple approaches that can be squeezed. Toxicology and pharmacokinetics are complex fields, and aren't going to roll over so easily. It's time for something new.
Category: Drug Assays | Drug Development | Drug Industry History | Pharmacokinetics | Toxicology
June 23, 2015
Here's a disturbing read for you: the author of this paper (Morten Oksvold, of Oslo University) sat down and did what none of us ever do. He chose three different journals in the oncology field, picked one hundred and twenty papers, at random, from their recent issues, and carefully looked every one of them over for duplications in the figures and data. On PubPeer, you can see what he found. Nearly a quarter of the papers had problems.
In case you're wondering, this proportion didn't vary significantly between the three journals, which were chosen at three different levels of prominence (as measured by impact factor). Time, chance, and figure duplication happeneth to them all. I should note that the duplication comes in several different flavors. The least concerning is the appearance of the same control experiments in more than one figure in a paper. One might wish for the controls to be run more than once - in fact, I'd most definitely wish for that - but the authors are not necessarily implying that these are separate experiments. (They're not dispelling any impression that they are separate, either). When the same control experiments (same gels) appear in more than one paper, that seems to be a further step down. The objects of the two papers are (presumably!) different, and there's even more reason to assume that the authors have, in fact, run this again and aren't just reusing the same control that looked so good that time. That's the problem - when you do this sort of thing, it makes a person wonder if there was only that one time.
There are plenty of less ambiguous cases, unfortunately. About half the cases are supposed to be from different experiments entirely. In both gel and microscope images, you can find examples of the same image representing what should be different things, and excuses run out at this point. It goes on. Oksvold then contacted the authors of all twenty-nine problematic papers to ask them about what he'd found. And simultaneously, he wrote the editorial staffs of all three journals, with the same information. What came of all this work? Well, "only 1 out of 29 cases were apparently clarified by the authors, although no supporting data was supplied", and he got no reply at all from any of the journal editors. Nice going, International Journal of Oncology, Oncogene, and Cancer Cell.
My take on all this is that this is a valuable study, with some limitations that haven't been appreciated by everyone commenting on it. Earlier this year, when the material started appearing on PubPeer, there were statements flying around that "25% of all recent cancer papers are non-reproducible!" This work doesn't show that. What it shows is that 25% of recent cancer papers appear to have duplicated figures in them (not that that's a good thing). But, as mentioned, at least half the examples are duplicated controls - correctly labeled, but reused. Even the nastier cases don't necessarily make the paper unreproducible. You'd have to dig into them and see how many of them affected the main conclusions. I'd guess that the majority of them do, but they don't have to - people can also cut corners in the scaffolding, just to get everything together and get the paper out the door. I am not defending that practice, but I don't want this study to be misinterpreted. It's worrisome enough as it is, without any enhancement.
I think what can be said, then, is that "25% of recent cancer papers have duplicated figures in them, which matter in some cases much more than others, since they appear for reasons ranging from expedience to apparent fakery". Not as catchy, admittedly, but still worth paying attention to. (More from Neuroskeptic here).
Category: The Scientific Literature
June 22, 2015
Here's an odd thing, noted by a reader of this site. Organic Letters has a retraction of a paper in the Baldwin group at Oxford, "Biomimetic Synthesis of Himbacine".
This Letter has been retracted, as it was found that (a) spectra of the linear precursor, compound 14, differed when its synthesis was repeated and (b) spectra published for several compounds resulting from compound 14 (compounds 3, 4, and 20) were scanned from other papers.
Those other papers are the ones from the Chackalamannil et al. synthesis of himbacine, which took someone a fair amount of nerve. I will assume that Jack Baldwin did not scan in the spectra and claim them for his own. The other authors on the paper are Kirill Tcabanenko, Robert Adlington, and Andrew R. Cowley, for whom I can find no recent information. There's a story here, for sure, but I don't know its details. . .
Category: The Dark Side | The Scientific Literature
I'd like to open up the floor for nominations for the Blackest Art in All of Chemistry. And my candidate is a strong, strong contender: crystallization. When you go into a protein crystallography lab and see stack after stack after stack of plastic trays, each containing scores of different little wells, each with a slight variation on the conditions, you realize that you're looking at something that we just don't understand very well.
Admittedly, protein crystallography is the most relentlessly voodoo-infested territory in the field, but even small-molecule crystals can send chills down your spine (as with the advent of an unwanted polymorph). For more on those, see here, here, and here, and this article. Once you start having to explore different crystallization conditions, it's off into the jungle - solvent (and a near-infinite choice of mixtures of solvents), temperature, heating and cooling rates along the way, concentration, stirring rate, size and material of the vessel - all of these can absolutely have an effect on your crystal formation, and plenty of more subtle things can kick in as well (traces of water or other impurities, for example).
To give you an idea, with a relatively simple molecule, fructose was apparently known for decades as the "uncrystallizable sugar". Eventually, someone sat down and brute-forced their way through the problem, making concentrated solutions and seeding them with all sorts of crystals of related compounds (another black art, and how). As I recall, the one nucleated with a crystal of pentaerythritol crystallized, giving the world the first crystalline fructose ever seen. Other conditions have been worked out since then (in crystallization there are always other possible conditions). But that's an example of the craziness. Does anyone have a weirder field or technique to beat it?
Category: Life in the Drug Labs
June 19, 2015
How much should drugs cost? That question can be answered in a lot of different ways, and at many levels of economic literacy. But the Wall Street Journal is reporting on a new comparison tool from a group at Sloan-Kettering, the "Drug Abacus".
As you will have already guessed, under many plausible assumptions (such as a year of life being worth $120,000, with 15% taken off for side effects), the model reports that many cancer drugs are overpriced. The two worst are Blincyto (blinatumomab) and Provenge. On the other hand, the nitrogen mustard derivative Treanda (bendamustine) comes out as worth nearly three times as much as Teva is charging for it. Your own mileage will vary, of course: some people will regard a year of life as worth substantially more than $120K. Note that as you get closer to $200,000 for a year of life, the majority of the drugs in the calculator become relative bargains. Now, most people will find the whole process of arriving at any such figure to be distasteful and disturbing (an understandable emotional response that underlies a lot of wrangling about the drug industry, I think). As I mentioned in that post, the English language has an entirely different word for "customer of a physician" than it does for a customer of anyone else providing any other service.
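For the curious, the basic arithmetic behind such a tool looks something like the following. This is my own back-of-the-envelope sketch, not Sloan-Kettering's actual model; the drug's price and benefit are hypothetical:

```python
def abacus_value(years_gained, value_per_year=120_000, toxicity_discount=0.15):
    """Crude value-based price: life-years gained, valued per year, docked for side effects."""
    return years_gained * value_per_year * (1 - toxicity_discount)

# Hypothetical drug: 1.5 extra years of life, list-priced at $250,000 per course
implied_value = abacus_value(1.5)
list_price = 250_000
print(f"implied value: ${implied_value:,.0f}")
print("overpriced" if list_price > implied_value else "relative bargain")
```

Slide `value_per_year` up toward $200,000 and the same drug flips from "overpriced" to "relative bargain", which is exactly the sensitivity noted above.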
The Drug Abacus is an attempt to bring the various factors of oncology drug prices into view: how much extra lifetime a given drug can provide, what the side effects (and thus quality of life) might be, and larger factors like how novel the medication is and what the overall population burden of that indication is. That mixture, actually, is one of the problems I have with this idea (which in principle I think is worth doing). An individual patient is not going to be deciding the price they're willing to pay for a new drug based on its overall effect on the population, or how it fits into the therapeutic landscape. They want to know if it will help them, personally, and by how much.
Another one of the sliding factors in the model, development cost, I don't think should even be in there at all. Drugs should be evaluated by what they do, not how easy or hard they were to find and develop. That factor can end up being misused in many ways: people can complain that Drug X had a faster path through the clinic, so it should cost less, and companies might turn around and argue that Drug Y had to go back for another Phase III, so therefore it should cost more. Neither of these arguments makes sense. Each company has to look at its overall R&D spending and its success rate and adjust its pricing based on the whole picture.
As I did in that post I linked to a couple of paragraphs ago, I'm going to use the always-annoying car analogy. What would a "car abacus" site look like? The various sliders would represent things that people take into consideration when they buy a car: its utility, its stylishness, its repair and upkeep costs, its resale value and expected lifetime. I don't think that there would be a "development costs" slider, would there? Or one for the car's overall use to society? But no matter what, some cars would certainly come out looking overpriced, unless you attach a large value to something like "prestige" or "stylishness", and even then, I'm not sure that the numbers would come out right. Manufacturers, though, charge what people are willing to pay, and if some people are willing to pay what seems like an irrational cost for a car, then so much the better.
The same principle operates at a coffee shop. You'll be charged extra for some foamed milk in your coffee, or a shot of hazelnut syrup or some "pumpkin spice", if it's November. That extra price, it has to be noted, is way out of all proportion for the cost of ingredients or the extra labor involved. The coffee shop is differentiating its customers, looking for the ones who have no objection to paying more for something different, and making sure to offer them something profitable when they come along. (And a chi-chi coffee shop, to extend the analogy, starts differentiating its customers even before they walk in the door, by the way it's decorated and the signals it sends).
But purchasing better health is, in the end, not quite like purchasing a car or even a FrappaMochaLatteChino. Thinking of it in those terms, I believe, can illuminate some aspects that otherwise get obscured, but in the end it's a much more basic and personal decision. And as for personal decisions, Milton Friedman divided expenditures into four categories: when we spend our own money on ourselves, when we spend our own money on other people, when we spend other people's money on ourselves, and when we spend other people's money on other people. Since he was famously libertarian, it will come as no surprise that he regarded the first category as the one where people were most likely to make better decisions (and it should also come as no surprise that government spending lands, as it would have to, in the bottom category). There are objections to this classification - you could say that the first category is also the one in which we might allow our emotions to override rational decision making, whereas the last one has the greatest scope for calm cost/benefit analyses.
Those objections are at the heart of the debate about drug costs, because there is nothing that we are more likely to become irrational over than our own health or that of someone close to us. An extremely uncomfortable thought experiment is to imagine a close family member becoming gravely ill, and then figuring out what you would be willing to pay to make them better. (You can extend your discomfort by imagining whether or not you'd be willing to pay that same cost, out of pocket, to extend a similar benefit to someone on the other side of the world whom you've never met and never will. That's the line of reasoning taken by Adam Smith in The Theory of Moral Sentiments, subject of a recent popular exposition.) OK, back to your close family member. Got a figure in mind for a cure? How about an extra year of life, then? After all, that's the unit of the Drug Abacus calculations. What about an extra year, but they can't get out of bed? An extra month? How about everything you have, house and all, flat broke for another ten minutes? Ten seconds? At some point, Homo economicus gets up off the floor, having been clubbed over the head at the beginning of this exercise, and says "Hmm. Maybe not."
But it takes a while for that to happen, understandably. And the whole thing, as mentioned, is wildly unpleasant even as a thought experiment, so going through it in real life is an experience you wouldn't wish on anyone. The debate on drug pricing, though, grabs us by the backs of our heads and forces our noses down into the subject.
Category: Drug Prices
June 18, 2015
I've mentioned numerous times around here that therapies directed against aging in general have a rough regulatory outlook. The FDA, in general, has not considered aging a disease by itself, but rather the baseline against which disease (increasingly) appears. This has meant that companies with ideas for anti-aging therapies have had to work them into other frameworks - diabetes, osteoporosis, what have you - in order to get clinical data that the agency will be able to work with.
Now, according to Nature News, the group that's testing metformin for a variety of effects in elderly patients is going to meet with the FDA to address just this issue:
Barzilai and other researchers plan to test that notion in a clinical trial called Targeting Aging with Metformin, or TAME. They will give the drug metformin to thousands of people who already have one or two of three conditions — cancer, heart disease or cognitive impairment — or are at risk of them. People with type 2 diabetes cannot be enrolled because metformin is already used to treat that disease. The participants will then be monitored to see whether the medication forestalls the illnesses they do not already have, as well as diabetes and death.
On 24 June, researchers will try to convince FDA officials that if the trial succeeds, they will have proved that a drug can delay ageing. That would set a precedent that ageing is a disorder that can be treated with medicines, and perhaps spur progress and funding for ageing research.
During a meeting on 27 May at the US National Institute on Aging (NIA) in Bethesda, Maryland, Robert Temple, deputy director for clinical science at the FDA’s Center for Drug Evaluation and Research, indicated that the agency is open to the idea.
Metformin and rapamycin are two of the compounds that would fit this way of thinking, and there will surely be more. Let's face it - any other syndrome that caused the sorts of effects that age does on our bodies would be considered a plague. To quote Martin Amis's lead character in Money, who's thinking about an actress he's casting in a movie who "time had been kind to", he goes on to note that "Over the passing years, time had been cruel to nearly everybody else. Time had been wanton, virulent and spiteful. Time had put the boot in." It sure does.
But we're used to it, and it happens to everyone, and it happens slowly. Does it have to be that way? The history of medicine is a refusal to play the cards that we've been dealt, and there's no reason to stop now.
Category: Aging and Lifespan | Regulatory Affairs
A huge amount of medicinal chemistry - and a huge amount of medicine - depends on small molecules binding to protein targets. Despite decades of study, though, with all the technology we can bring to bear on the topic, we still don't have as clear a picture of the process as we'd like. Protein structure is well-known as an insanely tricky subject, and the interactions a protein can make with a small molecule are many, various, and subtle.
This gets re-emphasized in this new paper from the Shoichet group at UCSF. They're using a well-studied model protein pocket (the L99A mutant of T4 lysozyme, itself an extremely well-studied protein). That cavity is lined with hydrophobic residues, and (being a mutant at a site without function) it's not evolutionarily adapted for any small-molecule ligands. It's just a plain, generic roundish space inside a protein, and a number of nonpolar molecules have had X-ray structures determined inside it.
What this paper does is determine crystal structures (to about 1.6 Å or better) for a series of closely related compounds: benzene, toluene, ethylbenzene, n-propylbenzene, sec-butylbenzene, n-butylbenzene, n-pentylbenzene, and n-hexylbenzene. That's about as nondescript a collection of aryl hydrocarbons as you could ask for, differing from each other only by the number and placement of methylene groups. (Three of these had already been determined in earlier studies). How does the protein cavity handle such similar compounds?
By doing different things. They found that one nearby part of the protein, the F helix, adopts two different conformations in the same crystal, in different proportions varying with the ligand. (The earlier structures from the 1990s show this, too, although it wasn't realized at the time). The empty cavity and the benzene-bound one have one "closed" conformation, but even just moving up to toluene gives you about 20% of the intermediate one, with a shifted F-helix. By the time you get to n-butylbenzene, that conformation is now about 60% occupied in the crystal structure, with 10% of the "closed", and now 30% of a third, "open" state. The pentyl- and hexyl-benzene structures are mostly in the open state. Digging through the PDB for other lysozyme cavity structures turned up examples of all three forms.
These adjustments come via rearrangement of hydrogen bonds between the protein residues, and it apparently has a number of tiny ratchet-like slips it can make to accommodate the ligands. And there's the tricky part: these changes are all balances of free energies - the energy it takes for the protein to shift, and the energy differences between the various forms once the shift(s) have taken place, which include the interactions with the various ligands. The tradeoffs and payoffs of these sorts of movements are the nuts-and-bolts, assembly-language level of ligand binding.
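That free-energy bookkeeping can be made concrete with a Boltzmann-population sketch. The ΔG values below are invented for illustration (not taken from the paper), but they show how sub-kcal/mol shifts in relative state energies produce exactly the kinds of occupancy swings seen in these crystal structures:

```python
import math

RT = 0.593  # kcal/mol at ~298 K

def populations(delta_Gs):
    """Boltzmann populations for states with the given relative free energies (kcal/mol)."""
    weights = [math.exp(-dG / RT) for dG in delta_Gs]
    total = sum(weights)
    return [w / total for w in weights]

# Invented relative free energies for the closed / intermediate / open states
# in the presence of a small ligand vs. a larger one (purely illustrative)
small_ligand = populations([0.0, 0.8, 2.0])
large_ligand = populations([1.0, 0.0, 0.4])
print([round(p, 2) for p in small_ligand])
print([round(p, 2) for p in large_ligand])
```

Note that a shift of only about 1 kcal/mol in relative stability is enough to move the dominant conformation from "closed" to "intermediate", which is why such small ligand changes can reshuffle the whole ensemble.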
And it has to be emphasized that this is a very simple case indeed. No polar interactions at all, no hydrogen bonds, no water molecules bridging or being displaced from the protein surface, no halogen bonds, no pi-stacking or edge-to-pi stuff. There are also, it has to be noted, other ways for proteins to deal with such small changes. The authors here, in fact, looked through the literature and the PDB for just such series to compare to, and found (for example) that the enzyme enoyl-ACP reductase (FabI) doesn't take on such discrete states - instead, a key residue just sort of smoothly slides into a range of positions. That said, they also found examples where the behavior is more like the mode-switching seen here.
If that's common, then calculating ligand binding gets more complicated, which is not what it needed. These are about the smallest and least substantial ligand changes you can come up with, and here's a protein shifting around quite noticeably between an ensemble of low-energy states to deal with them. The problem is, there are a huge number of such states available to most binding sites, and distinguishing them from each other, or from the original binding mode, by first principles is (in many cases) going to be beyond our capabilities for now.
Here's Wavefunction's take on these results - he says that "The conclusions of the paper are a bit discomforting. . ", and if I were a molecular modeler, I'd say the same thing!
Category: Analytical Chemistry
June 17, 2015
Why do we test new drug candidates on animals? The simple answer is that there's nothing else like an animal. There are clearly chemical and biological features of living systems that we don't yet understand, or even realize exist - the discovery of things like siRNAs is enough proof of that. So you're not going to be able to build anything from first principles; there isn't enough information. Your only hope is to put together something that matches the real thing as closely as possible, using original cells and tissues as much as possible.
The easiest way to do that, by far, is to just give your compounds to a real animal and see what happens. But you have to think carefully. Mice aren't humans, and neither are dogs (and nor are dogs mice, for that matter). Every species is different, sometimes in ways that make little difference, and sometimes in ways that can mean life or death. Animal testing is the only way to access the complexity of a living system, and the advantages of that outweigh the difficulties of figuring out all the differences when moving on to humans. But those difficulties are very real nonetheless. (One way around this would be to make animals with as many humanized tissues and systems as possible, although that's not going to make anyone any happier about testing drugs on them. The other way is to try to recapitulate a living system in vitro.)
But the cells in a living organ are different than the cells in a culture dish, both in ways that we understand and in ways that we don't. The architecture and systematic nature of a living organ (a pancreas, a liver) is very complex, and subject to constant regulation and change by still other systems, so taking one type of cell and growing it up in a roller bottle (or whatever) is just not going to recapitulate that. Liver cells, for example, will still do some liver-y things in culture. But not all of the things, and not all of them in the same way. And the longer they're grown in culture, the further they can diverge from their roots.
There has been a huge amount of work over the years trying to improve this situation. Growing cells in a more three-dimensional culture style is one technique, although (since we don't make blood vessels in culture tubes) there's only so far you can take that. Co-cultures, where you try to recreate the various populations of cell types in the original organ, are another. But those are tricky, too, because all the types of cell can change their behaviors in different ways under lab conditions, and their interactions can diverge as well. Every organ in a living creature is a mixture of different sorts of cells, not all of whose functions are understood by a long shot.
Ideally, you'd want to have many different such systems, and give them a chance to communicate with each other. After all, the liver (for example) is getting hit with the contents of the hepatic portal vein, full of what's been absorbed from the small intestine, and is also constantly being bathed with the blood supply from the rest of the body, whose contents are being altered by the needs of the muscles and other organs. And it's getting nerve signals from the brain along with hormonal signals from the gut and elsewhere, with all these things being balanced off against each other all the time. If you're trying to recreate a liver in a dish, you're going to have to recreate these things, or (more likely) realize that you have to fall short in some areas, and figure out what differences those shortfalls make.
The latest issue of The Economist has a look at the progress being made in these areas. The idea is to use the smallest cohorts of cells possible (these being obtained from primary human tissue), with microfluidic channels to mimic blood flow. (Here's a review from last year in Nature Biotechnology). It's definitely going to take years before these techniques are ready for the world, so when you see headlines about how University of X has made a real, working "(Organ Y) On a Chip!", you should adjust your expectations accordingly. (For one thing, no one's trying to build, say, an actual working liver just yet. These studies are all aimed at useful models, not working organs). There's a lot that has to be figured out. The materials from which you make these things, the sizes and shapes of the channels and cavities, the substitute for blood (and its flow), what nutrients, hormones, growth factors, etc. you have in the mix (and how much, and when) - there are a thousand variables to be tinkered with, and (unfortunately) hardly any of them will be independent ones.
But real progress has been made, and I have no doubt that it'll continue to be made. There's no reason, a priori, why the task should be impossible; it's just really hard. Worth the effort, though - what many people outside the field don't realize is how expensive and tricky running a meaningful animal study really is. Running a meaningful human study is, naturally, far more costly, but since the animal studies are the gatekeepers to those, you want them to be as information-rich, as reproducible, and as predictive as possible. Advanced in vitro techniques could help in all those areas, and (eventually) be less expensive besides.
Category: Animal Testing | Drug Assays | Toxicology
June 16, 2015
Since I mentioned the Bradner lab's protein-destruction technology recently, I should also highlight this recent paper from Craig Crews and co-workers. In that post, I noted that their PROTAC method has been heading in a small-molecule direction, and this paper certainly confirms that. Instead of the phthalimide (as in the Bradner work), they're using a hydroxyproline/arylthiazole compound as the common recognition element.
But the idea is the same as their earlier PROTAC work (and as in the Bradner work). That small molecule recruits the E3 ubiquitin ligase complex containing the von Hippel-Lindau protein, and coupling it to a ligand for a given target protein brings it into close enough proximity to start ubiquitinating the target. Then, it's all over - it's off to the proteasome for removal. (The whole process reminds me of E. O. Wilson's discovery of the "ant death pheromone"). Here's a perfectly useful, active protein being dragged off to the protein graveyard, and it would be wondering, if it could, just what brought this on.
Therapeutically, of course, there are a lot of proteins that could be worth sending there. This latest paper has mouse data, in xenograft models, that shows this happening with two different targets (ERR alpha and RIPK2). Even if you don't have access to the full text, you can see these ligands (and the VHL targeting compound) in the Supplementary Information file. These sorts of experiments are convincing enough that Merck has done a good-sized deal with Arvinas, the New Haven company launched to take advantage of the technique. (They have some of their own programs in development as well). I very much look forward to seeing what comes of all this.
And no, I don't mean the legal wrangling about whose technique is covered by whose patents. I want to see how these ideas perform in human patients. These ideas are pure chemical biology, using small molecules to make the machinery of living cells do something new for you, and it's a field that I have a great deal of interest in, affection towards, and hope for.
Category: Chemical Biology
Is the recent upturn in drug approvals the real thing? Years of overall decline would have to be overcome, but it would be good news if the industry has, in fact, picked back up again. A new article in Nature Reviews Drug Discovery takes on that question, using as its measure of productivity "the ratio of the first 7 years of revenue for all innovative launches in a given year to the corresponding portion of R&D investment over the previous 7 years". So that clears out the fourth-in-a-category stuff, and also uses a reasonable time horizon (which is the big problem with declaring a trend based on only one or two years' worth of drug launch data).
This measure shows that innovative drug approvals reached a peak in the mid-to-late 1990s, followed by a pretty steady decline, bottoming out around 2009-2011. And yes, they do see a slight rise since then, which is good to know. But there are a few factors to keep in mind: (1) this rise still only takes things back to levels that would have been considered awful before 2003 - if this trend is real, it's still got a ways to go. And (2), a good part of the rise in their chart is based on revenue forecasts. If you regard these as over-optimistic and tone them down, there's still a small rise, but only back to, say, 2006 levels.
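To make the metric concrete, here's a minimal sketch of that revenue-to-R&D ratio. The formula is from the paper as quoted above; all of the numbers plugged in below are made-up placeholders for illustration, not data from the article.

```python
# The Nature Reviews Drug Discovery productivity metric: first-7-years
# revenue from a given year's innovative launches, divided by R&D
# investment over the preceding 7 years.

def productivity_ratio(revenue_first_7yr, rd_spend_prior_7yr):
    """Ratio of early revenue from a launch cohort to prior R&D investment."""
    return sum(revenue_first_7yr) / sum(rd_spend_prior_7yr)

# Hypothetical cohort: forecast first-7-year revenues (in $ billions) for
# three innovative launches, against $ billions of R&D spending in each
# of the seven prior years.
launch_revenues = [4.0, 2.5, 1.5]
prior_rd_spend = [18, 19, 20, 21, 22, 23, 24]

ratio = productivity_ratio(launch_revenues, prior_rd_spend)
print(f"productivity ratio: {ratio:.2f}")
```

Note that the numerator leans heavily on revenue forecasts for recent launch years, which is exactly the weak point the post goes on to discuss.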
It's also worth noting the authors' beliefs about where this increased productivity is coming from:
We estimate that ~45% of the productivity improvement comes from slowing growth of, and increasing rationalization in, pharmaceutical R&D spending, and that the remaining ~55% comes from the higher total revenue that is expected from new launches. However, we believe that the more significant driver is the slower growth in spending, as 70–90% of the revenue impact on productivity can be attributed to forecast exuberance, and only 10–30% to the actual increase in the number and quality of launches.
There's the problem with relying on forecast revenues - that 45/55 split is if things go perfectly on the 55 (revenue) side. To be fair, the cost-cutting they're talking about is not all just ax-the-R&D-department stuff. It can also include lower expenses from any increase in clinical success rates (the jury is still out on those). And the authors also make the point that individual therapeutic areas often drive overall figures like these - for example, some of the recent rise is due to the huge successes in anti-infectives, which largely means HCV.
So overall, "cautious optimism" is probably warranted. If the recent clinical successes in oncology translate as they should, the industry may well see the sort of turnaround that can't be explained away.
Category: Business and Markets | Drug Industry History | Regulatory Affairs
June 15, 2015
The Fujita group has a new paper out on their "crystalline sponge" method for X-ray structure determination. I had the chance to hear Prof. Fujita speak on this recently - a very interesting talk indeed - and I've been looking forward to the paper. (Here's a summary from an equally impressed Quintus).
This time, they're looking at oxidation reactions of the cyclic terpene humulene. That compound, as a glance at its structure will indicate, is not a crystalline solid, but it soaks into the zinc-containing metal-organic framework just fine. And so do all the epoxide and aldehyde oxidation products when the starting material is treated with MCPBA or selenium dioxide. These structures would not be a lot of fun to figure out from NMR data alone, but the X-ray sponge technique nails them all down.
During his seminar, Fujita said that it's been his experience that if a parent structure fits well into the metal-organic framework, derivatives of it tend to have much higher success rates as well, and that seems to have been the case here. (The overall success rate, from all sorts of structures, is lower). But I'm not surprised that it doesn't work every time - what continues to surprise and delight me is that it works as well as it does.
One source of tension during the advent of this technique has been the crystallographic rigor of the results. Crystallographers are nothing if not rigorous with their data - if you're not, you can get lost so many ways that it'll make your head spin. (That especially goes for protein/ligand crystals, which are very rarely obtained with the resolution of small molecules). The "crystalline sponge" technique is sort of in-between: protein crystallographers look at it and say "Hmm, not so bad", while small-molecule crystallographers are more doubtful, especially when they see use of solvent-exclusion routines and other data-cleanup techniques. But the recent guidelines proposed by the Clardy group have probably helped in that regard.
I'm no crystallographer myself, but looking over the Supporting Information file for this paper, I note that it details when crystallographic restraints had to be used during the various structure refinements, which is welcome. You can also see that the group did a lot of refinement of the crystal soaking conditions for each compound, which is something that Fujita emphasized during his seminar as well. Interestingly, in one of the complexes an impurity showed up, also ordered in the crystalline framework - probably a phthalate plasticizer (leached out of labware). There's a possibility that it's a disordered nitrobenzene, a solvent used to produce the crystals initially, but an infrared spectrum of the crystals showed CO bands. (Clardy's procedure for making the crystals dispenses with the nitrobenzene entirely, so it'll be worth watching to see if phthalates show up from that recipe once in a while, too).
Category: Analytical Chemistry
When a drug company's New Drug Application runs into some sort of major issue, the FDA sends what's called a CRL, a Complete Response Letter. This details why the application was rejected and what can be done to fix the problem. And all this is, naturally, of great interest to the company's investors.
But they can go mow their lawns. Most of the time, that's going to provide as much information as the company will divulge on its own. That's what I took away from this study in the BMJ (here's a summary at FierceBiotech). The authors (who are with the FDA) looked at the full text of dozens of CRLs (from 2008 to 2013) and compared what's in them to the press releases at the time. What results is a sort of regulatory Rashomon.
18% of the time, the companies issued no press release at all, although you would have to characterize a CRL as a "material event", wouldn't you? Another 21% of the time, though, press releases "did not match any statements from the letters", so that's the next best thing to not issuing anything at all. 32 of the CRLs from this period called for new clinical trials to be conducted, but 40% of the press releases never got around to mentioning that. And of the 7 CRLs that brought up a higher mortality in the drug treatment group, only one of the corresponding press releases noted that fact. In none of the 61 cases did the company involved release the complete text of the CRL.
So, to quote that noted philosopher Bullwinkle, if you're waiting for the companies to finish the story, you're going to have a long wait. There have been calls over the years to make CRLs public, and this study shows why that's not a bad idea. The authors themselves are from the FDA, and they note that this proposal was recommended by the agency's own transparency task force in 2009.
At the very least, a summary of the CRL could be released by the agency in its own words - something that wouldn't disclose any proprietary details, but would at least tell everyone what's going on. Does the FDA need anyone else's permission to do this? The authors say that releasing the full texts "would likely require a change in FDA’s regulations". (Does that make it a matter for Congress to take up?) Someone needs to, because the current system is just not right.
Category: Business and Markets | Regulatory Affairs
June 12, 2015
I'm traveling today, so no time to put up a long post. I did want to call people's attention to this one by Milkshake, though, about optimizing reactions. I'll bet more people would experience this if more people had the nerve, the energy, or the wit to go back and run the control reaction the way he did. I know I've cut that particular corner many times myself. I think, though, he's going to have to shorten his technique's name to "BBO", since "blunder-based optimization" may not catch on.
And See Arr Oh points out a paper that uses silicone oil as a solvent, which I freely admit I left off my list the other day. Never tried that one, although I have ended up running a couple of reactions in it, without realizing it until I came back to the hood.
Category: Chemical News
June 11, 2015
Here's the latest RNAi dustup: Alnylam has filed a "trade secret misappropriation" lawsuit against Dicerna (thanks to a commenter here for mentioning this news). The real issue seems to be use of GalNAc conjugates for delivering siRNAs to the liver. Alnylam's president, Barry Greene, says that "Alnylam has led the discovery and development of GalNAc conjugate technology for delivery of RNA therapeutics, including through technology we obtained from our $175 million Sirna Therapeutics acquisition in our 2014 transaction with Merck."
That's where it'll get complicated. The RNAi Blog notes this lawsuit is claiming that Dicerna hired some of Merck's laid-off RNA people after that 2014 deal with Alnylam, which is when Merck finally wrote off the last of their earlier investment in Sirna.
So this all may come down to what the Merck people knew when they went to Dicerna, whether they were bound by noncompete agreements or not, and what the legal status of the GalNAc technology was at the time. Keep in mind that trade secrets are very different things from patents. If a trade secret holder did not take "reasonable protective measures", there may not be room for a misappropriation claim. (New Jersey has adopted a law based on the Uniform Trade Secrets Act, mentioned in that article, but although Massachusetts has not, its laws do have similar provisions on protection). We may not get to find out all the details - I'm guessing that this is one of those cases that's going to end in some sort of settlement - but this situation is worth keeping an eye on.
Category: Business and Markets | Patents and IP
June 10, 2015
Hmm. FierceBiotech is reporting that AstraZeneca's head of R&D, Briggs Morrison, has made a very sudden, very unexpected exit. People seem to have heard about this only this morning, and are being taken by surprise. . .
Category: Business and Markets
Potential trouble: thoughts of a link between cardiac birth defects and the antidepressant Zoloft (sertraline). Pfizer recently won such a case in Missouri, but the latest trial seems to have produced some internal documents that might lead to a different verdict. Since this is all in the context of lawsuits, the signal/noise (for an outside observer) is very, very poor, on both sides. But it's worth keeping an eye on.
Category: The Central Nervous System | Toxicology
So an FDA advisory committee met yesterday to consider the PCSK9 antibody from Regeneron and Sanofi, and today it's the turn of Amgen's candidate. These, as anyone with even a passing interest in cardiovascular medicine will know, are potential gigantic blockbusters and advances in the field, promising to lower LDL across a huge swath of the patient population. The question, though, is clinical outcomes - the big question is always clinical outcomes. Does lowering LDL via this mechanism lead to lower CV mortality, fewer heart attacks, etc.? By now, we have such data for statin therapy, and it does indeed seem to provide those benefits. Those numbers take years to generate, though, and although these studies are underway, the various PCSK9 developers would much rather not wait another two or three years for them to report, in much the same way that you or I would rather not spend that time dangling by our feet. So everyone is looking for a way to get these drugs onto the market.
Sanofi and Regeneron are looking to target patients who have a genetic predisposition to high LDL levels, and a larger group who are intolerant to statin therapy. The FDA committee was fine with the first plan, but not wildly enthusiastic about the second one. One worry they seemed to have was sending a "ditch your statins" message, which they don't want to do.
A whole generation of well-known statins, now cheap and easily available, has emerged with proven cardiovascular benefits for patients. Other LDL drugs, though, turned out to be controversial failures on CV outcomes. The FDA is fretting that an approval based on the LDL surrogate leaves it with no authority to demand a cardiovascular study. And regulators are alarmed over the impact these new drugs could have on statins, even though there are no proven CV advantages.
That's from this FierceBiotech piece, written before the committee meeting. Here's their take afterwards. Matthew Herper has an analyst who thinks that many statin-intolerant patients will go onto the new therapies, no matter what the label might say. We'll see. No PCSK9 antibody is going to reach its full market potential until the outcomes data arrive - that much has been clear for a long time. But now the companies (and their investors) are wondering about what they'll be able to realize before that happens.
Category: Cardiovascular Disease | Regulatory Affairs
There's been a lot written about nonreproducibility in biomedical research - it's a topic that everyone with experience in the field can relate to. Now here's a paper that suggests that the cost of all these nonreproducible papers could be as much as $28 billion per year.
For this paper, we adopt an inclusive definition of irreproducibility that encompasses the existence and propagation of one or more errors, flaws, inadequacies, or omissions (collectively referred to as errors) that prevent replication of results. Clearly, perfect reproducibility across all preclinical research is neither possible nor desirable. Attempting to achieve total reproducibility would dramatically increase the cost of such studies and radically curb their volume. Our assumption that current irreproducibility rates exceed a theoretically (and perhaps indeterminable) optimal level is based on the tremendous gap between the conventional 5% false positive rate (i.e., statistical significance level of 0.05) and the estimates reported below and elsewhere . . .
The authors believe (reasonably) that four types of trouble contribute the most to the problem: (1) problems with overall study design, (2) incorrect biological reagents and/or reference materials, (3) poorly thought-out or poorly reported laboratory protocols, and (4) problems with data analysis and reporting. Their estimate is that the percentage of non-reproducible work out there can't be any lower than about 18%, and if you were to assume worst-case for everything, it would be as high as 88%. The middle of their probability distribution is still 53%. Yep, according to these estimates, more than half of the literature is not reproducible.
That seems high to me, and I'm fairly cynical about the literature. But they've based this estimate on the many publications that have addressed problems with antibodies, with study designs, with contaminated cell lines, and many other factors, and if you start adding all these things up, the numbers get pretty alarming. (Perhaps they could be brought down a bit by assuming that some of these papers have screwed up in more than one way? I admit that this is an odd way to look for solace). The authors do admit that the literature on this subject is not as large (or as rigorously defined) as it should be to make a hard estimate of these things, but maintain that there's little doubt that the problem is real, and large enough to worry about.
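That overlap question can be made concrete with a little arithmetic. The sketch below is my own toy illustration, not the paper's actual method, and the per-category rates in it are invented placeholders: if a paper can fail in more than one of the four ways at once, simply summing the category rates overstates the cumulative rate, while an independence assumption gives 1 minus the product of the per-category survival probabilities.

```python
# Toy model: combining per-category irreproducibility rates. Under
# independence, P(at least one fatal flaw) = 1 - product(1 - p_i),
# which is lower than the naive sum whenever papers can fail in
# multiple categories at once.
from math import prod

def cumulative_rate(category_rates):
    """P(at least one error) assuming the error categories are independent."""
    return 1 - prod(1 - p for p in category_rates)

# Hypothetical rates for the four problem areas named above (not the
# paper's numbers): study design, reagents, protocols, data analysis.
rates = [0.25, 0.20, 0.15, 0.15]

naive_sum = sum(rates)               # 0.75 - double-counts overlapping failures
combined = cumulative_rate(rates)    # ~0.57 under independence
print(f"naive sum: {naive_sum:.2f}, independent combination: {combined:.2f}")
```

Even with overlaps accounted for this way, these illustrative inputs still land near the paper's mid-50s percentage, which is part of why the overall estimate is hard to argue down very far.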
Getting to the $28 billion figure is fairly straightforward:
Extrapolating from 2012 data, an estimated US$114.8B in the United States is spent annually on life sciences research, with the pharmaceutical industry being the largest funder at 61.8%, followed by the federal government (31.5%), nonprofits (3.8%), and academia (3.0%). Of this amount, an estimated US$56.4B (49%) is spent on preclinical research, with government sources providing the majority of funding (roughly US$38B). Using a conservative cumulative irreproducibility rate of 50% means that approximately US$28B/year is spent on research that cannot be replicated (see Fig 2 and S2 Dataset). Of course, uncertainty remains about the precise magnitude of the direct economic costs—the conservative probability bounds approach reported above suggest that these costs could plausibly be much smaller or much larger than US$28B. Nevertheless, we believe a 50% irreproducibility rate, leading to direct costs of approximately US$28B/year, provides a reasonable starting point for further debate.
Even if that 50% estimate is wrong, and it had better be, the figures are still very large. For comparison, that $28 billion figure is about 90% of the budget allocated to the entire NIH, so that's real money. (Note - I'm not saying that 90% of the NIH budget is wasted! I'm saying that if this paper is correct, an amount roughly equivalent to that is wasted by all parties every year, an alarming thought).
There's room for pessimism. Consider that this analysis takes a linear approach, just multiplying out the percentages. But research isn't linear like that. Some irreproducible papers will have a much greater impact, and waste vast amounts of cash beyond that of the original research group. But balancing that out, there are many "silently irreproducible" ones, published on not-very-interesting topics in not-very-interesting journals, that lead to no real followup. That $28 billion figure, though, accounts only for the latter situation, the direct costs. The follow-on costs of trying to build on work that isn't right haven't been factored in. So even if the real irreproducibility rate is, say, 25%, that would lead to direct costs of $14 billion, but real costs would surely still be higher than that. Not good at all.
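The direct-cost arithmetic is easy to check. The 2012 figures below come straight from the quoted passage; the only thing added is the post's own 25% what-if.

```python
# Direct-cost estimate from the quoted 2012 figures: total US life
# sciences spending, the preclinical share, and an assumed
# irreproducibility rate multiplied together.
total_us_life_science_spend = 114.8   # $B/year, 2012 estimate
preclinical_fraction = 0.49           # share spent on preclinical research

preclinical_spend = total_us_life_science_spend * preclinical_fraction  # ~$56.3B

for irreproducibility_rate in (0.50, 0.25):
    wasted = preclinical_spend * irreproducibility_rate
    print(f"at {irreproducibility_rate:.0%} irreproducibility: "
          f"~${wasted:.0f}B/year in direct costs")
```

The 50% rate reproduces the paper's roughly $28B/year; the 25% rate gives the roughly $14B/year mentioned above, before any follow-on costs.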
Update: here's a piece at Science's new site on this study. Second update: Nature News as well.
Category: The Scientific Literature
In 2012, Roche halted development of their CETP inhibitor dalcetrapib, part of what (so far) has been a grim chapter in cardiovascular drug development. At the time, this was put down to lack of efficacy, rather than the bad effects seen with Pfizer's torcetrapib, news that did nothing to cheer up the CETP research community.
Now DalCor (of Montreal) has licensed the compound, and is planning to go back into the clinic. What makes them think that they'll have any better time of it? The plan is to restrict the clinical trial (and the eventual patient population) to a group that appears to be genetically predisposed to respond. An analysis of the trial data suggests that polymorphisms in the ADCY9 gene (adenylate cyclase 9) could account for large variations in the effects of dalcetrapib. DalCor and the Montreal Heart Institute plan to screen about 30,000 potential patients to find enough of them with the right profile.
What's not clear to me is how many potential patients (and customers) this polymorphism leaves you with. DalCor has apparently run the numbers, and believes that there's potential here, but (naturally) this all depends on that analysis from the previous trials being solid. Dalcetrapib will never be the blockbuster that Roche hoped for some years ago, but just turning it into a drug at all would (by this point) make for quite a story.
Category: Cardiovascular Disease
June 9, 2015
While we're on the subject of market froth, take a look at what's going on today. Sage Therapeutics announced the results of a study in post-partum depression. (Note - these are not the same people as Sage Bionetworks out in Seattle; this is a small company in Cambridge). They used SAGE-547, a GABA-A allosteric modulator under development for epilepsy, and the treatment group seems to have shown a rapid improvement in their symptoms after a single dose. Actually, I shouldn't say "treatment group", because there was no placebo group. And there were four patients.
So I think you have to be of two minds about this. As that Forbes link shows, Sage is trying an interesting approach to CNS drug development - going after severe, almost completely unserved indications to see if they can get anything to happen at all. (SAGE-547 is being developed for severe status epilepticus, which in its worst form means basically continuous seizures, and for which the only real treatment is medically-induced coma and crossed fingers). Update: to clarify, this is targeted at the most severe drug-resistant cases. This is a perfectly legitimate approach, and I actually have to applaud the company for being willing to charge into such difficult therapeutic areas.
But that doesn't mean that their stock should be jumping like a startled wood frog, which is what seems to be happening in the premarket right now. A four-patient open-label study in depression is not something to go wild about. To my mind, the proper response is "Gosh, that could be quite interesting. Go run a much larger placebo-controlled trial." That's really important in this area, because placebo responses in the entire antidepressant field are notoriously high, and can be difficult to overcome. I have no idea what a typical placebo response is in post-partum depression patients, but I would very much want to see those numbers before getting too excited.
To be honest, Sage probably shouldn't have press-released this study at all. I'm not running a small company - good thing, that - so it's easy for me to say that. But this is so small and so preliminary. The way this biopharma market is these days, it's just a big spray of gasoline on the bonfire - a sudden whoosh, a burst of light, and it's gone. It's a long way from a well-built heating system.
Update: the stock of another company with a GABA-based epilepsy candidate, Marinus Pharmaceuticals (MRNS), is going berserk today as well.
Category: Business and Markets | The Central Nervous System
Update: the IPO went off at the top of its range, I am sorry to report. More here from FierceBiotech, and I agree with John Carroll's take.
I wrote just recently about Axovant and their plans to go public with an Alzheimer's therapy picked up on the cheap from GSK. Now here's a look from Adam Feuerstein at TheStreet.com at the whole situation, and it reeks even more than I'd thought.
20% of the company is being sold to the public, in an offering that's recently been scaled up to $250 million. And it's one of those friendly, welcoming deals, if you run in the right circles:
You with me so far? Hedge fund guy forms a company and subsidiary to buy an old Alzheimer's drug Glaxo didn't seem to want for $5 million. Six months later and without doing any clinical development at all, hedge fund guy sets terms for IPO of shell subsidiary which values the same old Alzheimer's drug at well over $1 billion.
It gets better. Perhaps sensing reluctance from outside investors to buy a minority stake in an old Alzheimer's drug Glaxo seemingly gave away for almost nothing, Ramaswamy gets two more hedge funds -- RA Capital and Visium Asset Management -- to "indicate an interest" in buying shares in the Axovant IPO valued at up to $150 million.
As inducement for their interest in the Axovant IPO, RA Capital and Visium are allowed to sell their shares (if they buy) after 90 days. The customary lock-up period for insiders in an IPO is 180 days.
The biotech bull market is a wonderful thing. . .
Indeed it is, if you're on the nice side of it. But if this IPO goes off well this week, I'm going to have to take it as a sign of undeniable craziness in the market. There really seems to be no reason for Axovant to be going public at this time and under these terms, other than the fact that there's a horde of people outside its door, jumping up and down and waving fistfuls of money. If that's your idea of a sustainable market, or if it just sounds like a fun time, then go ahead, I guess. Caveat bug-eyed, clueless emptor and all that.
Category: Alzheimer's Disease | Business and Markets
June 8, 2015
So here's an embarrassing moment: the NIH's own lab to produce clinical drug samples has been shut down by the FDA.
The disclosure comes amid stepped up enforcement by the FDA of manufacturing plants and, in particular, compounding facilities. Some of this activity came in response to an outbreak of fungal meningitis in 2012 at the New England Compounding Center, which led to 64 deaths and was described by officials as the worst public health crisis in the U.S. in decades.
Now, the NIH is feeling the same heat for having made some of the same mistakes as drug makers and compounders. The FDA report notes that the NIH facility was not designed to prevent contamination risks to sterile drugs, there were flaws seen in the ventilation system and there was inadequate quality control. . .
This is apparently affecting a number of NIH-sponsored trials - I'd be quite interested in knowing which drugs were being produced/formulated at this facility, though. Doing this correctly is simultaneously (1) a well-studied process, with clear guidelines and standards, and (2) not as easy as you might think. There are a lot of ways that contamination can slip in, and you have to be ridiculously picky and over-the-top all the time if you're going to do it safely.
Category: Clinical Trials
Earlier this year, the Nature Group publication Scientific Reports tried an experiment for a month with "fast-track peer review" (http://blogs.nature.com/ofschemesandmemes/2015/04/21/fast-track-peer-review-experiment-first-findings). This meant that you could pay extra to have your manuscript reviewed more expeditiously - I was going to say "more quickly", but that makes it sound a bit too quick, as in "Yeah, OK, why not".
The speed-up was accomplished by a third party (Research Square) that pays reviewers for quality work done quickly, as opposed to the usual system, which (as far as I can see) runs on persistent editorial nagging and reviewer guilt. (It's not clear to me, a priori, which of these two regimes is more desirable, but I'd certainly be willing to give the incentive scheme a shot, because I'm a pretty poor fit for the regular method). The journal gave this a one-month trial, with the number of speedy review papers capped at a certain level, to see how things worked.
Here's the report. The editors acknowledge that this system opened the way for possible new abuses (but I'd add that changing any system full of human behavior opens the way for possible new abuses). A relatively small percentage of people asked for the new service, but there were takers. Geographically, they say that the highest percentage came from China, but I'd like to see that normalized according to the usual Scientific Reports manuscript distribution before drawing any conclusions.
Update: it appears that there was an attempt at a mass resignation from the journal's editorial board over this issue.
They've ended the experimental period, but are reserving the right to try this again. I think that most scientists can agree that the current peer-review system is probably not the best of all possible worlds - Doctor Pangloss never had to deal with the dreaded Third Reviewer, especially when the whole process took about six months. Whether there's anything better, though, has been a subject of much debate. I definitely appreciate NPG's willingness to experiment, and I wish that other scientific publishers were doing the same.
Category: The Scientific Literature
June 5, 2015
A new paper in PLoS Computational Biology is getting a lot of attention (which, and I'm not trying to be snarky about it, is not something that happens every day). Here's the press release, which I can guarantee is what most of the articles written about this work will be based on. That's because the paper itself becomes heavy going after a bit - the authors (from Tufts) have applied machine learning to the various biochemical pathways involved in flatworm regeneration.
That in itself sounds somewhat interesting, but not likely to attract the attention of the newspapers. But here's the claim being made for it:
An artificial intelligence system has for the first time reverse-engineered the regeneration mechanism of planaria--the small worms whose extraordinary power to regrow body parts has made them a research model in human regenerative medicine.
The discovery by Tufts University biologists presents the first model of regeneration discovered by a non-human intelligence and the first comprehensive model of planarian regeneration, which had eluded human scientists for over 100 years.
The "100 years" part is hyperbole, because it's not like people have been doing a detailed mechanistic search for that amount of time. Biology wasn't up to the job, as the earlier biologists well knew. But is the artificial intelligence part hyperbole, or not? As the many enzymes and other proteins involved in planarians have been worked out, it has definitely been a challenge to figure out what's doing what to what else for which reasons, and when. (That's the shortest description of pathway elucidation that I can come up with!) The questions about this work are (1) is the model proposed correct (or at least plausibly correct)? (2) Was it truly worked out by a computational process? And (3) does this process rise to the level of "artificial intelligence"?
We'll take those in order. I'm actually willing to stipulate the first point, pending the planarian people. There are a lot of researchers in the regeneration field who will be able to render a more meaningful opinion than mine, and I'll wait for them to weigh in. I can look at the proposed pathways and say things like "Yeah, beta-catenin would probably have to be involved, damn thing is everywhere. . .yeah, don't see how you can leave Wnt out of it. . ." and other such useful comments, but that doesn't help us much.
What about the second point? What the authors have done is apply evolutionary algorithms to a modeled version of the various pathways involved, and let it rip, rearranging and tweaking the orders and relationships until it recapitulates the experimental data. It is interesting that this process didn't spit out a woolly Ptolemaic scheme full of epicycles and special pleading, but rather a reasonably streamlined account of what could be going on. The former is always what you have to guard against with machine-learning systems - overfitting. You can make any model work if you're willing to accept sufficient wheels within wheels, but at some point you have to wonder whether you're still optimizing towards reality.
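To make the process concrete, here is a minimal, purely illustrative sketch of the evolutionary-search idea: candidate "models" are mutated and selected until they reproduce the target data, with a parsimony penalty to discourage the epicycle-style overfitting mentioned above. All of the names, the toy data, and the fitness function are my own assumptions for illustration - this is not the paper's actual algorithm or its pathway representation.

```python
import random

random.seed(0)

# Stand-in for the experimental observations the model must recapitulate.
TARGET = [1.0, 2.0, 3.0]


def fitness(model):
    """Lower is better: data-fit error plus a penalty on model complexity."""
    n = max(len(model), len(TARGET))
    m = model + [0.0] * (n - len(model))
    t = TARGET + [0.0] * (n - len(TARGET))
    error = sum((a - b) ** 2 for a, b in zip(m, t))
    complexity_penalty = 0.01 * len(model)  # discourage wheels within wheels
    return error + complexity_penalty


def mutate(model):
    """Jitter each parameter; occasionally add a new one (grow the model)."""
    m = [x + random.gauss(0, 0.1) for x in model]
    if random.random() < 0.1:
        m.append(random.gauss(0, 1))
    return m


def evolve(generations=500, pop_size=30):
    """Keep the fittest half each generation, refill with mutated survivors."""
    pop = [[random.gauss(0, 1)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]
        pop = survivors + [
            mutate(random.choice(survivors))
            for _ in range(pop_size - len(survivors))
        ]
    return min(pop, key=fitness)


best = evolve()
```

The complexity penalty is the key design choice here: without it, the search happily piles on parameters to chase every wiggle in the data, which is exactly the Ptolemaic-epicycle failure mode.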
How close is the proposed scheme to what people already might have been thinking (or might have already proposed themselves)? In other words, did we need a ghost come from the grave to tell us this? I am not up on the planarian stem-cell literature, but my impression is that this new model really is more comprehensive than anything that's been proposed before. It provides testable hypotheses. For example, it interprets the results of some experiments as implying the existence of (as yet unknown) regulatory molecules and genes. (The authors present candidates for two of these, and I would guess that experimental evidence in this area will be coming soon).
It's also important to note, as the authors do, that this model is not complete. It only takes into account 2-D morphology, and has nothing to say about (for example) the arrangement of planarian internal organs. This, though, seems to be a matter of degree only - if you're willing to collect more data, code it up, and extend the model accordingly, its successor should presumably be able to deal with this sort of thing.
And that brings us to point three: is this a discovery made via artificial intelligence? Here we get into the sticky swamp of defining intelligence, there to recognize the artificial variety. The arguments here have not ceased, and probably won't cease until an AI hosts its own late-night talk show. Is the Siri software artificial intelligence? Are the directions you get from Google Maps? A search done through the chemical literature on SciFinder or the like? An earlier age would have probably answered "yes" (and an even earlier age would have fled in terror) but we've become more used to this sort of thing.
I think that one big problem in this area is that the word "intelligence" is often taken (consciously or not) to mean "human intelligence". That doesn't have to be true, but it does move the argument to whether border collies or African grey parrots demonstrate intelligence. (Personally, I think they do, just at a lower level and in different ways than humans). Is Google Maps as smart, in its own field, as a border collie? As a hamster? As a fire ant, or a planarian? Tough question, and part of the toughness is that we expect intelligence to be able to handle more than one particular problem. Ants are very good at what they do, but they seem to me clearly to be bundles of algorithms, and is a computer program any different, fundamentally? (Is a border collie merely a larger bundle of more complex algorithms? Are we? I will defer discussion of this disturbing question, because I see no way to answer it).
One of the hardest parts of the work in this current paper, I think, was the formalization step, where the existing phenomena from the experimental literature were coded into a computable framework. Now that took intelligence. Designing all the experiments (decades worth) that went into this hopper took quite a bit of it, too. Banging through it all, though, to come up with a model that fit the data, tweaking and prodding and adjusting and starting all over when it didn't work - which is what the evolutionary algorithms did - takes something else: inhuman patience and focus. That's what computers are really good at, relentless grinding. I can't call it intelligence, and I can call it artificial intelligence only in the sense that an inflatable palm is an artificial tree. I realize that we do have to call it something, but the term "artificial intelligence" probably confuses more than it illuminates.
Category: Biological News | In Silico | Who Discovers and Why