BCP-11799; No. of Pages 22
Biochemical Pharmacology xxx (2013) xxx–xxx
Journal homepage: www.elsevier.com/locate/biochempharm

Review

Translational paradigms in pharmacology and drug discovery

Kevin Mullane a,*, Raymond J. Winquist b, Michael Williams c

a Profectus Pharma Consulting Inc., San Jose, CA, United States
b Department of Pharmacology, Vertex Pharmaceuticals Inc., Cambridge, MA, United States
c Department of Molecular Pharmacology and Biological Chemistry, Feinberg School of Medicine, Northwestern University, Chicago, IL, United States

Article history: Received 16 October 2013; accepted 16 October 2013; available online xxx

Abstract

The translational sciences represent the core element in enabling and utilizing the output of the biomedical sciences and in improving drug discovery metrics by reducing the attrition rate as compounds move from preclinical research to clinical proof of concept. Key to understanding the basis of disease causality and to developing therapeutics is the ability to accurately diagnose a disease and to identify and develop safe and effective therapeutics for its treatment. The former requires validated biomarkers; the latter, qualified targets. Progress has been hampered by semantic issues, specifically those that define the end product, and by scientific issues that include data reliability, an overtly reductionistic cultural focus and a lack of hierarchically integrated data gathering and systematic analysis. A necessary framework for these activities is represented by the discipline of pharmacology, efforts and training in which require recognition and revitalization.
© 2013 Elsevier Inc. All rights reserved.

Keywords: Pharmacology; Translational science; Drug discovery; Bias; Biomarkers

Contents

1. Introduction
2. Biomedical research funding in the decade of the 21st century
3. Output from biomedical research activities
4. Challenges in effective translational science
   4.1. Reductionism
   4.2. Bias
      4.2.1. Experimental bias
      4.2.2. Ignorance bias
      4.2.3. Bias by misrepresentation
      4.2.4. Bias by pressure
      4.2.5. Literature reporting biases
      4.2.6. Entrepreneurial/biotech bias
5. Fundamentals of the translational process
   5.1. Genome Wide Association Study (GWAS)/Next Generation Sequencing (NGS)
      5.1.1. Difficulties in interpretation of GWAS
      5.1.2. Missing heritability
      5.1.3. Gene–gene interactions
      5.1.4. Gene identification is only the beginning, not the goal
      5.1.5. Assigning relevance by computer algorithms
      5.1.6. Gene hunting and the one that got away – function
      5.1.7. "Next Generation" Sequencing (NGS) technologies
      5.1.8. RNA sequencing
      5.1.9. Statistical analysis
      5.1.10. Interpreting genotyping studies: application to autoimmune disorders (AIDs)
      5.1.11. Epigenetics

* Corresponding author. E-mail address: [email protected] (K. Mullane).
0006-2952/$ – see front matter © 2013 Elsevier Inc. All rights reserved.
http://dx.doi.org/10.1016/j.bcp.2013.10.019

Please cite this article in press as: Mullane K, et al. Translational paradigms in pharmacology and drug discovery. Biochem Pharmacol (2013), http://dx.doi.org/10.1016/j.bcp.2013.10.019


      5.1.12. Gene hunting: hope or hubris?
      5.1.13. PheWAS – Phenome-Wide Association Studies
   5.2. Target validation/qualification
   5.3. Target-based versus phenotypic screening approaches
   5.4. Natural products as a basis for phenotypic discovery efforts
   5.5. Animal models and their predictive value
6. Hierarchy in advancing targets and therapeutics
   6.1. Stroke
   6.2. Additional therapeutic areas
   6.3. Considerations
   6.4. Following the clinical path
   6.5. Pluripotent stem cells as disease models
7. Revisiting translation
   7.1. Alzheimer's biomarkers
   7.2. Translatability scoring
   7.3. Clopidogrel – bidirectional translation
   7.4. NXY-059
   7.5. Statin translatability
8. Future directions
References

1. Introduction

The goals of the biomedical research enterprise generally associated with Vannevar Bush's post-WWII "Endless Frontier" of science initiative [1] are two-fold: firstly, to understand human disease causality, its progression and prognosis; and, secondly, to identify safe and effective therapeutics that can ameliorate disease by restoring normal tissue function. Accordingly, research in the absence of any considered intent to provide tangible benefit to society cannot be justified [2]. To execute the goals of biomedical research efficiently and productively requires a series of hierarchical translational frameworks that provide the necessary structure to focus and prioritize research activities in order to make informed decisions. At the preclinical level, this occurs by using data from investigations of disease causality to identify targets and to begin the process of their validation through animal testing. The aggregate data derived are then used at the clinical interface to transition drug-like new chemical entities (NCEs) into clinical trials, a process termed T1 translational medicine [3,4]. This involves considerations of NCE efficacy, selectivity and safety, together with animal PK/PD (pharmacokinetic/pharmacodynamic) properties, that facilitate the design of Phase I safety and Phase II proof of concept trials as well as the prediction of human exposure. The final translational process is that of moving approved therapeutics into clinical practice and health care decision-making to enhance the adoption of "best practices" within the community [5], the T2 translational medicine process [4]. Of the three translational processes, that termed T1 is probably the most controversial, as it is thought to be the weakest link and a key factor in the Phase II attrition rate [6].

However, none of these processes should be unidirectional: clinical studies can be invaluable in informing the research component, e.g., by defining the appropriate phenotype and/or relevant genotype and by identifying active metabolites in human tissues, while "best practices" aid in understanding what type of therapeutic is acceptable in the marketplace. From a preclinical research perspective, the American Society for Pharmacology and Experimental Therapeutics (ASPET) has defined T1 translational research in terms of developing "methods and systems to integrate molecular, cellular, tissue, organ and clinical information so that the response to experimental therapeutics in model disease systems and patients is fully understood" [7], in essence describing the discipline of applied pharmacology [8]. The present article, in using this research-centric definition, focuses on the many challenges, and some potential solutions, to improving T1


translation and making it the more effective component of the biomedical research enterprise that was envisaged in its conceptualization as part of the "Endless Frontier" [1,4,9].

2. Biomedical research funding in the decade of the 21st century

In 2013, the US biomedical research enterprise (federal, academic and industrial) is anticipated to spend more than $220 billion on research, both preclinical and clinical, with the majority of the funding coming from the biopharmaceutical industry [9]. While the precise split between basic/preclinical research, preclinical development and clinical trials is generally a moving target depending on the organization, these resources will support many hundreds of thousands of experiments that include: chemical synthesis [10–12]; target [13–16] and biomarker [17] selection and validation; assessment of target engagement and function [18,19]; animal disease model testing [20]; various preclinical development activities that include absorption, distribution, metabolism and excretion (ADME) studies [21], the formulation and scale-up of new chemical entities (NCEs), and early-stage toxicology and safety pharmacology [22,23]; and, ultimately, clinical trials [24,25]. The $220 billion to be spent on biomedical research in 2013 comes from a variety of sources in addition to the biopharmaceutical industry [9], including taxpayers via federal government allocations; philanthropic organizations like the Juvenile Diabetes Research Foundation (JDRF), the Huntington's Disease Foundation (HDF) and the Multiple Myeloma Research Foundation (MMRF); and investors, both in stock markets and in venture capital. As noted [2], the principal outcome anticipated from these investments is an improvement in societal health.

Ancillary outcomes, which are likely to precede tangible improvements in health care and are also necessary for these to occur, involve the economic contributions to national competitiveness in knowledge-based economies, with the financial benefits to society reflected in public funding to universities and research institutes to support research, in returns to "for profit" organizations and investors, and in concomitant collateral economic benefits to local communities [26,27]. Thus the NIH and the FDA are vital to the economy of the Bethesda/Rockville, MD area, with universities, pharma and biotech being critical to the local economies of Boston/Cambridge, San Diego, San Francisco, Research Triangle Park, Medicon Valley, Rehovot, Cambridge UK, etc. [28]. Witness the impact on the local economy when Pfizer sequentially closed research operations in

Ann Arbor (Pharmacia-Upjohn Warner Lambert-Parke Davis), Princeton (Wyeth-Ayerst) and Sandwich, Kent (Pfizer) as it routinely reorganized in response to continued shortfalls in its product pipeline [29].

3. Output from biomedical research activities

Irrespective of the originating scientific discipline or context, the data generated from hierarchical preclinical research activities are key to understanding the etiology of human disease states, the various targets and pathways involved in tissue homeostasis, dysfunction and pathophysiology, and the effects of known drugs and NCEs on their function. Additionally, for the latter, the objective and transparent prioritization and integration of the data generated, together with any additional experimentation necessary to "fill in the gaps", represents the preclinical data portfolio to support an IND (Investigational New Drug application; FDA) or CTA (Clinical Trial Application; EMA), the document(s) that support the transition of an NCE from preclinical lead compound to clinical candidate and that are used iteratively in the back-translation of clinical findings to the preclinical setting [8,30].

Research activities involve a succession of experimentally derived data sets that rest on the premise that the preceding data are the product of objective hypothesis testing and of informed, transparent and documented data interpretation, such that the conclusions shared with other researchers via the peer-reviewed scientific literature reflect the unbiased and complete set of information obtained. On the subject of science and knowledge (e.g., data generation), Menand [31] has noted that 'The pursuit, production, dissemination, application, and preservation of knowledge are the central activities of a civilization. . .[with it being]. . . important for research and teaching to be relevant' and, most importantly, put to use. Crichton has also noted that "In science consensus is irrelevant. What is relevant is reproducible results" [32]. In the event of subsequent disconnects in published findings, or of new, unexpected findings, it is assumed that the initial data can be retrieved in raw form and reanalyzed with appropriate insights and context for reconciliation [33]. Thus the peer-reviewed literature, grant reviews, regulatory authority reports and data dissemination at scientific meetings have traditionally served as interactive mechanisms intended to ensure the robustness and transparency of experimental data, in addition to providing historical venues for clarification and mentoring.

It has become increasingly apparent that many preclinical experiments are not conducted in a manner appropriate to justify their conclusions [34]. As a result, they often cannot be reproduced, whether because of flawed assumptions and/or experimental design or because of bias in data analysis and reporting [35,36]. Replication differs from reproduction: the former reflects the technical stringency of repeating a specific experiment, irrespective of whether its outcomes are accurate, while the latter reflects the fundamental accuracy of an experimental observation, which can be recreated by others whether or not the replication is exact [37]. Other reasons for research findings that cannot be reproduced involve issues labeled as misconduct, including overt fraud involving data fabrication, manipulation and plagiarism [38–42], which has led to numerous and highly visible retractions [37,42,43], the occurrence of which appears to be increasing [43,44] and which can undermine the core credibility of research, especially in the biological sciences [45]. Implicit in reproducing an initial scientific finding ("don't believe anything until it has been replicated") is an increase in the likelihood of its being correct or, in a more emotional context, valid, with the resulting inflammatory connotations of true or false. This leads to the alternate conclusion that experiments that cannot be


reproduced are false, thus establishing an unacceptable and uncertain basis for additional studies. In addition to being inconsistent with the "substance of the discipline" [37] and wasting both time and resources, irreproducible experiments lead to a false set of assumptions that can undermine translational success in both the basic sciences [40,41] and drug discovery research [34,35], as well as in transitioning novel therapeutics from the laboratory to the clinic [46]. Examples of the former include fabricated and retracted data related to human somatic cell cloning [40] and serial fraud in human behavioral studies [39], while examples in the clinical arena include fabricated clinical studies of the treatment of postoperative pain with COX2 inhibitors [47] and data sets linking MMR (measles, mumps, and rubella) vaccine to autism [48,49], both of which have had a major impact on medical practice.

4. Challenges in effective translational science

Many factors can impact experimental reproducibility and in turn diminish success in the translational process. Some are inherent in the biological sciences, e.g., biological noise and random fluctuation, while others reflect limitations in the scope of the scientific approach that often go unappreciated, e.g., data selection and other biases, subjective or overt, and abstract reductionism versus holistic integration.

4.1. Reductionism

A major consequence of the molecular biology and personal computer revolutions of the late 20th century has been a diminution of the individual intellectual contribution to research, displaced by high-throughput methodologies, chemical (combinatorial and parallel) and biological, that generate huge amounts of data in a facile manner but tend to remove the investigator from direct involvement with the raw data, the latter being automatically captured in a spreadsheet, calculated and collated before ever being reviewed. The limitations of reductionism in understanding function in biological systems have been compared, hypothetically, to assessing the function of a radio based solely on knowledge of what the individual parts do in isolation [50]. The likelihood of selecting those parts that are indispensable for function and reassembling them in the right relationship has a low probability of success, comparable with inferring how the brain functions in memory consolidation from an understanding of transporter dynamics in the blood–brain barrier. Reductionism in the absence of context [8] often leads to disconnects with the reality of biomedical research [2] and has prompted several pithy quotes on the process, e.g., "low input, high throughput, no output science" [51], "turn on the computer, turn off the brain" [52] and "spreadsheets are easy; science is hard" [53]. These quotes reflect a perception that scientists have a decreased ability to think beyond the interrogation of spreadsheets and data sets. Society is increasingly making decisions based on the digitized, aggregated opinions of others, a 'positive-herding' phenomenon [53]; combined with the absence of an intimate and long-standing involvement in data generation and its context, this tends to reinforce an attitude of "group think" replete with Orwellian connotations. And while the ability to generate data has increased, it appears to have had minimal impact in improving success in drug discovery [55–57], with the cost of developing a new drug, an always contentious number [58,59], now approaching $5 billion [60]. This is, in part, a reflection of Eroom's law [61], a recently proposed inversion of the well-known law in computing, Moore's Law, which states that the number of transistors in an integrated circuit doubles every two years. Conversely, in drug discovery, Eroom's law states that the cost of development of a new


drug doubles every 9 years. While the actual numerical value of the timing factor is debatable depending on the source, the costs of producing an approved new drug continue to increase [59,60], with much of the preclinical cost attributable to the disconnects resulting from reductionism and bias [8,62].

4.2. Bias

Generic definitions of bias include a "prejudice in favor of or against one thing. . .compared with another" and "any process at any stage of inference that tends to produce results or conclusions that differ systematically from the truth" [63]. Bias usually appears with a multitude of qualifiers that indicate its pervasiveness. These include confirmation bias, a tendency to seek out information that confirms a bias while ignoring everything that contradicts it (a hallmark of some aspects of contemporary biomedical research); overconfidence bias [64], a major flaw in the decision-making process whereby individuals believe they can see what others cannot; and belief bias, whereby everyone but the biased individual is deemed susceptible to errors in thinking that approach the level of illogicality. In research, bias refers to situations where prejudice or selectivity introduces a deviation or distortion in experimental outcomes at levels beyond chance. This is a growing problem, one less prominent before the advent of the Internet and, perhaps, the Public Library of Science (PLoS), which has become a major forum for the dissemination of meta-analyses related to bias [34,38,44], a problem previously obscured by the sheer volume of data reported. In this context, the number of scientific articles published has doubled in the last decade; half a million a year, some 1400 a day, are in the biomedical research field [37]. Of these, somewhere between 50% [37] and 80% [65] of publications apparently make little contribution to the advancement of science, going uncited and being described as "sit[ting] in a wasteland of silence, attracting no attention whatsoever" [65].

4.2.1. Experimental bias

Experimental bias includes a variety of factors related to experimental design, execution and interpretation that can be further confounded by inherent biological variability or "noise". It can usually be identified by an absence or limited number of data replicates, a major shortcoming, one among many, of Western blot-derived data sets [66,67]. An additional consideration in experimental design is the control of confounding factors that are not always obvious and can indirectly influence the experimental outcome. In some cases this reflects the natural variability inherent in biology, which contributes to defining appropriate group sizes. Other confounding factors include physiological parameters beyond the target organ that can have a dramatic influence on the outcome; one example is the need to control blood pressure when evaluating compounds in models of heart attack, stroke, thrombosis or cognitive function. Misinterpretation can also result from the use of single, high doses of compounds that render them non-selective [68] and, where appropriate, from failing to consider the contributions of factors like age (which can alter drug metabolism and pharmacodynamics), gender (and background hormonal variations) and the ambient environment (including potential stressors such as light, noise, temperature and humidity), all of which can affect the functional readout. Chronobiological variation represents another major and often subtle contributor to experimental bias that is rarely considered, other than by conducting a series of experiments at the same time on successive days, often by chance rather than design owing to the conformity of the work day. Alterations in circadian rhythm markedly affect metabolism [69], feeding [70], endocrine function

[71] and sleep [72], increasing the risk of metabolic syndrome disorders like obesity and Type-2 diabetes and inducing mood disorders and stress [73]. Circadian dysfunction is involved in a variety of human disease states, including obesity, diabetes, asthma, stress, cancer and depression, that can be exacerbated by lack of sleep (shift work) or by disruption of the circadian sleep–wake cycle (jet lag) [74,75]. Despite circadian rhythm and its dysfunction having a major impact on experimental outcomes in drug discovery [76,77], neither is controlled for in experimental design, the one exception being the phase-shifting of nocturnal rhythms in rats and primates to accommodate sleep studies in drug discovery for hypnotics and sleep-promoting agents [78,79].

4.2.1.1. Design bias

Design bias reflects critical features of experimental planning that include: the design of an experiment to support rather than refute a hypothesis; a lack of consideration of the null hypothesis [80], posited as an expectation that measured parameters compared between two groups (e.g., control and treated) within an experiment do not differ [81]; failure to incorporate appropriate controls and reference standards, the latter being gold-standard drugs or research tools selective for the targets/pathways being interrogated; and a reliance on single data points (endpoint, time point or concentration/dose point). Of particular concern is the failure to perform experiments in a blinded and randomized fashion, which can result in 3.2- and 3.4-fold higher odds, respectively, of observing a statistically significant result compared to studies that were conducted appropriately [82]. The risk of introducing bias into non-blinded experiments, even unintentionally, is self-evident, so it is disturbing to find that recent analyses of 290 animal studies [82] and of 271 publications [83] revealed that 86–89% were not blinded.
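The blinding and randomization safeguards discussed here lend themselves to a simple mechanical sketch. The snippet below is purely illustrative and is not drawn from the studies cited: the group names, subject IDs and label format are assumptions. Subjects are randomized to groups, and the key mapping coded labels to treatments is held apart from the coded labels the analyst sees.

```python
# Illustrative sketch only: randomized allocation with coded labels so the
# analyst handling the data never sees treatment assignments. Group names,
# subject IDs and the label format are hypothetical.
import random

def blinded_allocation(subject_ids, groups, seed=None):
    """Shuffle subjects into equal-sized groups.

    Returns (key, coded): 'key' maps coded label -> treatment and should be
    held by a third party until the analysis is locked; 'coded' maps
    subject -> label and is all the blinded analyst ever sees.
    """
    rng = random.Random(seed)
    ids = list(subject_ids)
    rng.shuffle(ids)                      # randomization step
    per_group = len(ids) // len(groups)
    key, coded = {}, {}
    for g, group in enumerate(groups):
        for k, subject in enumerate(ids[g * per_group:(g + 1) * per_group]):
            label = "X%03d" % (g * per_group + k)
            key[label] = group            # sealed until un-blinding
            coded[subject] = label
    return key, coded

key, coded = blinded_allocation(range(12), ["vehicle", "treated"], seed=1)
```

In this scheme, outlier rules and any sub-group analyses would be written down before `key` is opened, consistent with the a priori requirement described above.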
It is critical that an investigator involved in data collection and analysis be unaware of the treatment schedule. How an outlier is defined and handled (e.g., dropped from the analysis), and which sub-groups are to be considered, must be established a priori and settled before the study is un-blinded.

4.2.2. Ignorance bias

Bias resulting from ignorance can be as simple as not knowing which statistical test is appropriate for a particular dataset [84]. This often leads to a trial-and-error process, facilitated by spreadsheets and desktop statistical programs, in which multiple tests are applied iteratively until one apparently shows that the data set of interest is significant, rather than the appropriate test being defined a priori. Another example is failing to recognize that inappropriately large effect sizes can be observed in underpowered studies where the number of animals used is small [34,83,85,86]. These invalid interpretations cannot be replicated in follow-up studies that are more appropriately powered, or when replication is attempted in a separate laboratory, reflecting ignorance of the importance of determining effect sizes and conducting power calculations before initiating the key experiments [86,87]. This has led the National Institutes of Health (NIH) to mandate power calculations that validate the number of animals necessary to determine whether an effect occurs before it funds a program [88]. Such calculations frequently necessitate preliminary, exploratory analyses, which should be clearly differentiated from the definitive study rather than published as definitive, instead of using the minimal number of animals (n = 3) necessary to populate a Student's t-test software program or relying on investigator 'experience' or history. In both examples, statistical analyses are abused and the constraints of the experiments and their interpretation are poorly understood [85].
Moreover, the replication of any finding in the definitive study is absolutely critical, but is done infrequently – if at all.
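The power calculations discussed above reduce, for a two-group comparison of means, to a standard normal-approximation formula for the sample size per group; n = 3 falls far short even for large effects. A sketch using only the standard library (the effect sizes and defaults are textbook values, not figures from the text):

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate animals per group for a two-sided, two-sample comparison
    of means at standardized effect size d (Cohen's d), via the normal
    approximation; the exact t-based answer is slightly larger."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_beta = NormalDist().inv_cdf(power)           # quantile for desired power
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Even a "large" effect (d = 0.8) needs ~25 animals per group at 80% power,
# an order of magnitude more than n = 3.
n_large = n_per_group(0.8)
n_medium = n_per_group(0.5)
```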

Please cite this article in press as: Mullane K, et al. Translational paradigms in pharmacology and drug discovery. Biochem Pharmacol (2013), http://dx.doi.org/10.1016/j.bcp.2013.10.019

4.2.3. Bias by misrepresentation

Scientists are by nature inherently optimistic, using the advent of each new technological advance, e.g., mapping of the Human Genome, gene therapy, pluripotent stem cells, antisense, RNAi, or any of the many "-omics" disciplines, to proclaim that a vast array of new drug targets, disease biomarkers, diagnostics and therapeutics will be identified [89], the direct result of which will be that many common diseases will not only be treated but eradicated in the near term. As hyperbole inevitably gives way to reality, these proclamations are amended and the timelines extended [90], with the previous shortcomings failing to instill any sense of caution for the next round of hyperbole [91]. The latter carries through to the publication arena, where the rush to be first to publish a new "high-profile" finding often results in "sloppy science" [92]. More significantly, this can lead to strong biases toward reporting positive rather than negative data [36,93,94] and to a lack of caution in extrapolating the findings from a limited data set into far-reaching, ill-judged and irresponsible conclusions, e.g., that an in vitro observation in a cell line with a particular compound represents a cure for diabetes. The bias introduced in new studies is illustrated by the finding that early replication studies tend to reach the opposite conclusions to the original study – termed the Proteus phenomenon [95,96] – although these replication studies also tend to show some bias in attempts to be provocative and gain a high profile [95]. From the standpoint of translational opportunities it is uncertain which is more disconcerting: the level of bias and data selection identified in initial studies; the finding that 70% of follow-on studies contradict the original observation; or that the phenomenon is so common and well recognized that it actually has a name.
Selective reporting is widespread: an appraisal of 160 meta-analyses involving animal studies in six neurological conditions, most of which reportedly showed statistically significant benefits of an intervention, found that the "success rate" was too large to be true and that only 8 of the 160 could be verified, leading to the conclusion that reporting bias was a key factor [94]. A clinical meta-analysis of brain volume abnormalities in diverse brain structures in patients with various mental health conditions likewise concluded that there were too many statistically significant associations, reflecting selective reporting and selective analyses [97].
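The "too large to be true" argument can be made concrete with a binomial calculation. Suppose, generously, that all 160 meta-analyzed interventions were genuinely effective but were each tested at roughly 30% statistical power (a figure assumed here purely to illustrate underpowered animal studies); a near-universal "success rate" would still be essentially impossible:

```python
from math import comb

def prob_at_least(k, n, p):
    """Exact P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

POWER, N_STUDIES = 0.30, 160        # assumed power; study count from the text
expected_hits = POWER * N_STUDIES   # ~48 significant results expected
p_implausible = prob_at_least(150, N_STUDIES, POWER)  # chance of 150+ "successes"
```

With roughly 48 positives expected, observing 150 or more has a vanishingly small probability, which is the logic behind the excess-significance appraisals cited above.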

4.2.4. Bias by pressure

Bias by pressure occurs as the result of peer pressure to publish to "keep up" or for career advancement, perceived pressure to interpret results along conventional lines, and the forceful influence of mentors who should instead be instilling high ethical standards. The retrospective selection of data for publication at the conclusion of a study can be influenced by prevailing wisdom promoting expectations for particular outcomes (a possible example of the 'positive-herding' phenomenon [54]) or, where the benefit of hindsight at the conclusion of a study allows an uncomplicated sequence of events to be traced and promulgated, as the only conclusion possible. A survey conducted at MD Anderson reported that 31% of trainees who responded had felt pressured to generate data in support of their mentors' hypothesis, while 50% indicated they were aware of the mentor's requirement for a high-impact journal publication before training could be completed – scenarios fraught with potential for selective reporting of experimental results [98]. Research misconduct in terms of overt fraud [38,39,42,47,49] and plagiarism [99,100] is a topic of major concern but remains relatively rare in research publications. However, data manipulation, data selection and other forms of bias are increasingly prevalent. Whether intentional, the result of inadequate training in both experimentation and ethics, or due to a lack of attention to quality controls, they foster an approach and attitude that blurs the distinction between necessary scientific rigor and deception, and probably contribute substantially to the poor reproducibility of biomedical research findings [35,36,98]. Scientific bias represents a proverbial "slippery slope", from the subjectivity of "sloppy science" [92,101] and lack of reproducibility [102], to the deliberate exclusion or non-reporting of data [35,36], to outright fabrication [40,42,47,49,103,104]. In the latter instance, journal editors are increasingly faced with submissions providing minimal or no estimates of variability. When the submission is questioned or rejected, the authors frequently submit the missing variability information in record time, e.g., in a day, raising the question of whether there was a lack of oversight in excluding this information or whether it was fabricated in response to the critique. Such a case occurred in an analytical chemistry article where the following comments were found in the supplementary information – "please insert NMR data here! where are they? and for this compound, just make up an elemental analysis..." [105]. Plagiarism, distortion of data or its interpretation, and physical manipulation of data (e.g., of Western blots [66,67] or NMR spectra [105,106] to make the outcomes cosmetically more appealing or obvious) all contribute to the growing concerns regarding scientific integrity and transparency. Adding to these issues is the selective sharing of clinical trial outcomes [107,108], with inconclusive/negative trials often not reported [107,109], or disclosed minimally in the form of overly positive press releases that distort the conclusions and remove any widespread opportunity to learn from the outcomes.

These issues increase in importance as the outcomes of research bias impact the directions and expectations of future endeavors, such as the expenditure of millions of dollars on research programs predicated on 'breakthroughs', some of which are then progressed to human trials, and where inappropriate NCEs are advanced into and through clinical trials exposing patients to undue risk – examples include putative therapeutics for Alzheimer's disease being advanced to Phase III trials in the absence of any evidence of a signal in Phase II [110], and a clinical trial in oncology that had to be stopped when the microarray gene expression studies that underpinned the trial could not be reproduced and significant flaws were identified in the original studies [102–104].

4.2.5. Literature reporting biases

The primary purpose of scientific publication is to share ideas and novel results to foster further developments in the field, fulfilling the dictum of "The pursuit, production, dissemination, application, and preservation of knowledge" [31]. However, the increasing prevalence of irreproducible and fraudulent research leading to retraction should concern every scientist, since it taints the profession by undermining the basic premise of both the research and its publication. While many scientists dismiss the problem as inherent to any human activity and perpetrated by just a small group, there is a larger issue on the fringes of deception that is far more prevalent and of equal concern, where the adoption of certain practices can blur the distinction between valid research and distortion – between "sloppy science", "misrepresentation", and outright fraud [111]. Key contributions to the debate on reproducibility are two papers from the pharmaceutical industry attempting to replicate published findings. The first, from Bayer Healthcare [35], reported that only about 25% of published preclinical studies could be validated to the point where internal drug discovery projects could continue. The second [36], from research efforts conducted at Amgen, found that of 53 "landmark studies" only 6 could be reproduced. While these results were troubling, the responses the authors received when following up with the original authors were alarming, viz.: "To address these concerns, when findings could not be reproduced, an attempt was made to contact the original authors, discuss the
discrepant findings, exchange reagents and repeat experiments under the authors' direction, occasionally even in the laboratory of the original investigator. These investigators were all competent, well-meaning scientists who truly wanted to make advances in cancer research. In studies for which findings could be reproduced, authors had paid close attention to controls, reagents, investigator bias and describing the complete data set. For results that could not be reproduced, however, data were not routinely analysed by investigators blinded to the experimental versus control groups. Investigators frequently presented the results of one experiment, such as a single Western-blot analysis. They sometimes said they presented specific experiments that supported their underlying hypothesis, but that were not reflective of the entire data set. There are no guidelines that require all data sets to be reported in a paper; often, original data are removed during the peer review and publication process." Additionally, Begley was quoted [112] as meeting "with the lead scientist of one of the problematic studies. 'We went through the paper line by line, figure by figure,' said Begley. 'I explained that we re-did their experiment 50 times and never got their result. He said they'd done it six times and got this result once, but put it in the paper because it made the best story. It's very disillusioning.'" These disturbing reports are not – as many in industry know – an uncommon occurrence, and they highlight the fact that reproduction of published studies is mandatory in industry when those studies are used to initiate a drug discovery program; otherwise, experimental reproduction is rarely attempted. These reports serve as a wake-up call that improvements in scientific rigor and interpretation are necessary and urgent.
Accenture and CMR [113] evaluated target-based drug discovery projects in the pharmaceutical industry and found a 97% attrition rate, with only 3% reaching the preclinical development stage (suggesting either poor reproducibility or deficient target validation). Publication bias inevitably involves all those engaged in data dissemination – journal editors, peer reviewers and readers.

4.2.5.1. Author bias. Following on from the specific publication biases noted above, authors can be biased by the motivations of the funding system to publish positive data in prestigious journals in the best possible light. As the editor of the Lancet, Richard Horton, noted [114]: "A single paper in Lancet and you get your chair and you get your money. It's your passport to success". Accordingly, whether wittingly or unwittingly, authors can select whatever data are necessary to prove their chosen point and often exaggerate the impact of their findings. As an example, a single finding of a level of activity of a compound in a cell line in vitro is often described as a cure for an intractable disease state, despite the fact that the target with which the compound interacts has not been validated and there is no information on the drug-like properties, side effects or safety of the compound. Were such optimistic conclusions real, there would be an abundance of new therapeutics rather than the current dearth. Additional facets of potential publication bias from authors include what work they cite to provide context – sometimes only their own – whether they discuss or dismiss contradictory findings, and the issue of appropriate authorship.

4.2.5.2. Peer review bias. Peer review bias involves both journal editors and reviewers [115] and can include the following factors: whether a submitted manuscript contains suitable subject matter for the field of interest of the journal; whether it requires peer review; whether the results reported are positive or negative; the editor's and/or reviewer's view of the quality of the laboratory from which the research originates; whether one of the authors is a member of the Howard Hughes Medical Institute/Institute of Medicine/Wellcome Trust/European Research Foundation/CNRS/Nobel Prize Nominating Committee and/or has an impact on the review of a grant application submitted by the editor or reviewers; whether the editor or reviewers 'respect' or are friends with any of the authors; concerns regarding "honorary" and/or ghost authorship; whether the editor or reviewers have a conflict of interest with the findings reported, either because the author(s) are in direct competition with the reviewer or loathe one another from a professional vantage point; whether the editor or reviewers have the necessary scientific competence and experience to review the work; and, very importantly, given the pro bono nature of the process, whether sufficient time is available for the review. The latter is of increasing concern as fewer and fewer scientists consider being a reviewer a worthwhile contribution to their career goals, an unfortunate manifestation of the egocentric component of current science. Because of the issues with retractions and peer review, Nature has recently issued a set of guidelines [116] that specifically ask the reviewer to address questions related to statistics and general methods, including replicates, powering and randomization, documentation and validation of reagents, and details on animal and human studies.

4.2.5.3. Reader bias. Reader bias is yet another variation on publication concerns and may include: journal bias, where the reader restricts his/her reading to journals like Science and Nature, forsaking all others; confirmation bias, a natural tendency to read and cite information that confirms the reader's already-held beliefs or hypotheses while ignoring everything that contradicts them; selection bias, only remembering the data most favorable to an already-held bias; and temporal bias, a bias toward the most recent findings with an absence of appreciation and knowledge of the history of a scientific field [117].
The latter has become a major issue as many readers come to rely almost exclusively on the news media, blogs, reviews and abstracts to keep up to date, and as writers in the mainstream media (Fortune, Forbes, Wired, Slate, Wall St J, NYT, FT, etc.) have demonstrated considerable and informed insight into the biomedical research endeavor, both the science and its social and financial implications. This has also led to the widely read monograph, "How to Read A Paper" [118].

4.2.6. Entrepreneurial/biotech bias

Entrepreneurial/biotech bias is a variation on the theme of author bias in which data are more overtly selected and/or misrepresented, usually in PowerPoint format, where only clearly unambiguous data, whether true or false, are shared, and where the word "statistical" is casually used as a qualifier for the term "significant" in the total absence of any statistical test being used. This type of bias is justified as avoiding confusion with review boards, scientific advisory boards, etc. It often leads to a simplification (or "dumbing down"/filtering) of data, with consequent miscommunication leading to erroneous decision-making. A classic example of the consequences of miscommunication is the NASA Space Shuttle Columbia disaster [119], where issues related to a life-threatening and ultimately disastrous hole in the wing of the spacecraft were communicated in PowerPoint format rather than actual technical reports. As the slide deck made its way through successive layers of management, the content was "simplified/clarified" and the seriousness of the situation filtered out, contributing to the breakup of the Shuttle on reentry to the Earth's atmosphere [120]. While this tragic example may be viewed as somewhat tangential to translational biomedical research, a conclusion reached by the Columbia Return to Flight Task Group was that "PowerPoint... presentations should never be allowed to replace, or even supplement, formal documentation" and that "many young engineers do not understand the need for, or know how to prepare, formal engineering documents such as
reports, white papers, or analyses" [121], conclusions that are equally applicable to many current situations in biomedical science, especially in the early-stage biotech arena, where an absence of data is disguised in the visual aspects of PowerPoint and where the decision makers are often non-scientists.

5. Fundamentals of the translational process

While there is considerable discussion around the preclinical/clinical interface in translational science – the point at which a compound is allowed to proceed into clinical trials – there are a number of enabling data sets that are key not only to facilitating the translational process but also to providing basic parameters for the interpretation of subsequent steps. These include the Genome Wide Association Study (GWAS) and Next Generation Sequencing (NGS) activities that establish the genetic basis for understanding disease and its incidence; the epigenetic and environmental factors that modify genetic outcomes; the important topics of target and biomarker validation (where much is published but where progress is confounded by real-world use [122]); and the hierarchy of preclinical screening activities that progressively qualify a drug target and the NCEs that selectively interact with it en route to the clinic.

5.1. Genome Wide Association Study (GWAS)/Next Generation Sequencing (NGS)

With the completion of the Human Genome Project [123] it was prophesied that defining the genetic and molecular basis of all diseases was within reach [91] and would lead to a vast array of new targets, biomarkers, diagnostics and therapeutics, such that many common diseases would be prevented rather than merely treated, although its actual contributions have been less tangible [92].
Nonetheless, the project has resulted in the generation of a huge amount of genetic information, now accumulating at a pace that exceeds the growth of computing capacity (Moore's law, from which Eroom's law [61] was reverse engineered) and showing no signs of abating [124]. As a result, much of the last decade can be defined as the GWAS era, with the first GWAS – published in 2005 and identifying variants in the complement factor H gene underlying age-related macular degeneration [125] – considered a transformative technology [126]. GWASs seek to identify disease-related genetic variants associated with a particular phenotype, using genotyping arrays comprised of between 0.1 and 2.3 million single nucleotide polymorphisms (SNPs) per array. Since there are approximately 3.2 billion nucleotide positions in the human genome of some 22,000 genes, and the population frequency of germ-line variants is estimated to be on the order of 0.1%, or 3 million, the ability to identify key genes with such limited genotyping arrays relies on the finding that variants are inherited together in haplotypes [127]. A SNP in the genotyping array 'tags' a particular haplotype, which serves as a proxy for variants in adjacent genes through a process termed linkage disequilibrium. Thus the SNP variant identified by GWAS as associated with the disease need not be causal, but is statistically correlated with another variant that is. However, once a region has been 'tagged', discerning the single causative variant (if any) can prove difficult. While the vast majority of early GWAS claims, based on single teams and without replication, were simply wrong [128], more recent large consortium studies with rigorous replication have generated genetic associations with a higher level of credibility. That more such studies continue to be conducted on a regular basis, epitomizing the law of diminishing returns, might represent attempts to justify the financial investment and technical
capability rather than an expectation to unmask any transformative genetic variant with a dramatic effect size. Debate continues over the value of the technology, but it is somewhat semantic, since research is an evolutionary process and GWASs have provided some significant discoveries while also highlighting important limitations requiring new technological advances; the field has now largely moved on to more detailed interrogative techniques. To date, GWAS applications [126,129] include:

- the search for disease-related genes
- identification of new targets for drug discovery
- disease classification
- patient screening as a diagnostic tool
- risk stratification of patients
- repurposing of existing drugs
- pharmacogenomics
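The tag-SNP logic described above rests on linkage disequilibrium, conventionally quantified as D = p_AB − p_A·p_B and its normalized form r². A minimal sketch (the frequencies are illustrative, not from any study):

```python
def ld_r2(p_ab, p_a, p_b):
    """r^2 between two biallelic loci, from the AB haplotype frequency
    (p_ab) and the allele frequencies p_a and p_b.  D measures departure
    from independent assortment; r^2 normalizes it to [0, 1]."""
    d = p_ab - p_a * p_b
    return d * d / (p_a * (1 - p_a) * p_b * (1 - p_b))

# Perfect LD: allele A always travels with allele B on the same haplotype,
# so genotyping the tag SNP is as informative as genotyping the untyped variant.
r2_perfect = ld_r2(p_ab=0.2, p_a=0.2, p_b=0.2)
# Linkage equilibrium: the loci assort independently; the tag is uninformative.
r2_none = ld_r2(p_ab=0.25, p_a=0.5, p_b=0.5)
```

This is also why a GWAS hit need not be causal: any variant in high r² with the causal one produces an equally strong association signal.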

5.1.1. Difficulties in interpretation of GWAS

The difficulty of defining the relevance of any GWAS-identified variant is compounded by the fact that less than 10% actually occur in the coding region of genes, which makes up less than 2% of the genome. The ENCODE Project [130] is tasked with functionally annotating the non-coding regions of the genome, which are now known to have important regulatory functions. It was recently found that DNA variants associated with common disease traits are concentrated in noncoding regulatory regions of the human genome marked by DNase I hypersensitive sites (DHSs), 93.2% of which overlap a transcription factor regulatory sequence [131]. These variants disrupt transcription factor recognition sequences, alter allelic chromatin states and form regulatory networks. DHSs containing variants identified by GWAS control distant genes that account for the phenotype – notably beyond the range of linkage disequilibrium sites. Moreover, variants in transcription factor genes can perturb entire networks rather than just single genes. Sections of the noncoding portion of the genome also give rise to small noncoding RNAs that may make important contributions to disease etiology. These include microRNAs (miRNAs), transcribed ultra-conserved regions, small nucleolar RNAs, PIWI-interacting RNAs, large intergenic noncoding RNAs, and a heterogeneous group of long noncoding RNAs [132]. The most widely studied to date are the miRNAs, which are thought to regulate the translation of more than 60% of protein-coding genes and can influence cell differentiation, proliferation and survival, implicating them in cancer; moreover, approximately 70% of miRNAs are found in the brain, many specific to neurons, where they have been linked to neurodegenerative disease states.
GWASs have several other inherent limitations in addition to those cited above, including being restricted to searching for common variants; an inability to identify structural variants (e.g., insertions, deletions, inversions and copy number variants (CNVs), which occur often in the human genome and have been related to diseases such as the CNS disorders schizophrenia and autism [133]); and coverage of only part of the genome [126,134]. Moreover, while GWASs have been successful at identifying numerous variants associated with a common disease at high levels of statistical significance, they have generally failed to identify disease genes with large effect sizes, with many of the variants having odds ratios of less than 1.4. Even a summation of the small effect sizes of all identified variants often accounts for less than 20% of the inherited phenotype, giving rise to the concept of "missing heritability" [134,135], although it should be recognized that the extent to which a particular variant accounts for the ensuing phenotype is a poor prognostic indicator of the significance of that gene for drug discovery. This is exemplified by the gene encoding HMG-CoA reductase, which
determines only a minor portion of the variance in cholesterol levels yet represents an important drug target [136].

5.1.2. Missing heritability

While there is no rationale for a linear addition of the effect sizes of different DNA variants, "missing heritability" has been attributed to various causes, including further (currently unidentified) common variants with small effect sizes, rare variants with large effect sizes that are not captured by GWASs, epistatic (gene–gene) interactions and epigenetic (gene–environment) interactions [137]. The proportion of heritability of a trait is defined as that found divided by that expected as calculated from population studies of closely related individuals, usually comparing monozygotic and dizygotic twins. It has been suggested that this method yields inflated heritability estimates, since issues of dominance, epistasis and a shared environment are more prevalent in such subjects, and that the amount of "missing heritability" is actually much smaller [138]. This is challenging to discern, since it has been calculated that the detection of genetic interactions in the case of Crohn's disease, for example, would require sample sizes well in excess of 500,000 [138].

5.1.3. Gene–gene interactions

While the significance of gene–gene interactions has been recognized for decades, the breadth of sequence data now available has highlighted the disconnect between genotype and phenotype in certain individuals, indicative of mutations in suppressor or modifier genes. Examples include subjects with known pathogenic mutations in the LDL receptor gene but normal cholesterol levels [139], or those homozygous for null α-1 antitrypsin alleles who do not develop COPD [134]. It is proposed that a mutation in another, suppressor, gene masks the expected phenotype, and that these genes and their products represent alternative therapeutic targets to overcome the disease [134].
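The twin comparison behind the "expected" heritability denominator (Section 5.1.2) is classically summarized by Falconer's formula, h² ≈ 2(r_MZ − r_DZ). Because monozygotic pairs also share dominance, epistatic and common-environment effects, the formula easily overestimates h², which is precisely the inflation argument of [138]. A sketch with assumed, purely illustrative twin correlations:

```python
def falconer_h2(r_mz, r_dz):
    """Classical twin estimate of heritability: MZ twins share essentially
    all segregating variants and DZ twins about half, so twice the gap in
    trait correlations estimates h^2 (under purely additive assumptions)."""
    return 2.0 * (r_mz - r_dz)

# If MZ pairs correlate 0.8 and DZ pairs 0.5 on a trait (assumed figures),
# the naive estimate is h^2 = 0.6 -- the benchmark that summed GWAS effect
# sizes are then expected, often in vain, to recover.
h2 = falconer_h2(0.8, 0.5)
```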
Epistasis can also have profound effects in animal models of disease, leading to contradictory results and apparent obfuscation regarding the importance of a mediator or pathway. Take, for example, the role of TGFβ in suppressing papilloma incidence in mice. The mouse Tgfb1 gene is polymorphic, with different levels of expression in different mouse strains, resulting in strain-specific susceptibility to developing skin tumors. However, the tumor risk is also dependent on an interaction between Tgfb1 on proximal chromosome 7 and an unlinked modifier locus, Skts15, on proximal chromosome 12, which can mask the effects of Tgfb1 to promote disease risk and is also strain dependent [140]. Analogous effects have been reported regarding the role of TGFβ in the mouse ovalbumin-induced asthma-like response, where Tgfb1+/− mice exhibit enhanced airway hyper-reactivity (AHR) compared to Tgfb1+/+ mice in a strain-specific manner, and a synergistic interaction between the TGFβ1 genetic modifier loci Tgfbm2 and Tgfbm3 reverses the AHR response, although neither locus alone is effective [141]. This interaction between Tgfbm2 and Tgfbm3 differentially regulates AHR and airway inflammation in response to ovalbumin, enhancing the latter and suggesting independent genetic control of these responses. Consequently, exploring the role of TGFβ in mouse asthma models can produce different results depending on the strain of mouse, background genetics and the primary response measured.

5.1.4. Gene identification is only the beginning, not the goal

There are approximately 7000 genetic diseases attributed to a single gene product that follow a Mendelian familial inheritance pattern (Online Mendelian Inheritance in Man, 2013; http://www.omim.org), 50% of which are now associated with a specific molecular defect, representing a major success story for gene hunters. Identifying the gene and the defect does not, however, automatically
provide a road map to new treatments. Huntington's disease (HD) is a Mendelian disorder in which the culprit gene (htt) was mapped in 1983 [142] and the triplet expansion responsible for creating the phenotype identified in 1993 [143]. Yet neither the molecular mechanism by which these CAG repeats promote neurodegeneration nor novel and effective treatments have been forthcoming, despite two decades of research based on the htt finding [144]. This provides a sobering reminder that identifying the relevant gene, even in a 'simple' Mendelian disorder, is only the first step, making HD the "poster child" for the immense difficulty in translating genetic findings into an effective therapeutic, even for a disease with a single causal gene. Genotyping arrays use a single 'tag' SNP as a proxy for adjacent genetic variations that might be causal and identified by linkage disequilibrium; however, once a gene has been 'tagged', identifying the single causative variant (if any) can prove difficult. Take, for example, the GWAS findings in childhood asthma, where in 2007 Moffatt and colleagues identified a critical region on chromosome 17q21 containing 19 genes [145]. Expression data pinpointed ORMDL3, a gene of unknown function, as having the strongest association [146], while subsequent studies suggested the relevant gene was another in this locus, GSDMB, nearby and in linkage disequilibrium with ORMDL3. Identification of the relevant gene is further hampered by the fact that GSDMB does not have an ortholog in the mouse, in which this chromosomal region resides on a different chromosome [146]. Despite considerable efforts, the causative gene in this tightly linked locus has yet to be identified. While genotyping is used to aid patient diagnosis and risk assessment, there is always a balance to be struck between sensitivity and specificity, and the thresholds can be arbitrary and controversial [133,147].
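Sensitivity and false-positive rate only translate into clinical usefulness once combined with disease prevalence: Bayes' rule gives the positive predictive value of a positive genotype call. A sketch using a sensitivity of 0.80 and a false-positive rate of 0.40 (the macular degeneration panel figures discussed in this section), with a 10% prevalence assumed purely for illustration:

```python
def ppv(sensitivity, fpr, prevalence):
    """Positive predictive value via Bayes' rule: the probability that a
    positive genotype call corresponds to a true case."""
    true_pos = sensitivity * prevalence
    false_pos = fpr * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# 80% sensitivity, 40% false positives, assumed 10% prevalence: fewer than
# one in five positive calls would be a true case.
p = ppv(0.80, 0.40, 0.10)
```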
Genotyping for the three major variants associated with age-related macular degeneration can correctly identify 80% of cases, but has a false positive rate that exceeds 40% [133]. While even minor changes in discriminating disease may have important clinical implications, if there is no good evidence to define the relevant metrics for risk, disease categorization or sub-typing, and no alternative treatment strategies based on that information, then they provide little of value [147]. Currently, for many common diseases, GWAS does not enhance risk assessment much beyond what can be achieved using conventional risk factor assessment and family history. For example, in cardiovascular disease, which has a high heritability, family history is a stronger predictor of disease than any mix of genetic markers [148]. Genotyping two variants for atrial fibrillation that are prevalent, robust and replicable does not improve diagnostic accuracy when added to conventional clinical risk factors [149].

5.1.5. Assigning relevance by computer algorithms

GWASs identify variants via statistical associations in order to generate a hypothesis, part of the agnostic, unbiased strategy that underpins the intrinsic "null" value of such studies. Subsequent functional studies are then required to determine whether any identified gene variants impact protein function and are causal to the phenotype, with polymorphisms potentially causing gain-of-function, loss-of-function, gain-of-toxic-function or no change in the encoded protein. Determining functional relevance represents a major bottleneck in establishing the importance of any genetic association, since appropriate assays are not readily available for many gene products and take a large amount of time to institute. Trying to do this simultaneously for the multiple genetic variants found in most common diseases is challenging.
Absent readily available functional measures of relevance, various bioinformatics algorithms have been developed to predict whether a given variant is likely to impart a functional change in the encoded product. These algorithms all have issues of specificity and sensitivity, while
agreement across algorithms is notoriously poor, with less than 5% of variants being predicted consistently [150,149].

5.1.6. Gene hunting and the one that got away – function

SNPs ‘tagging’ haplotypes on the basis of statistical associations in population studies are typically analyzed for linkage to other genetic variants in adjacent regions or, if they occur in non-coding DHSs, for transcriptional regulation of other genes at more remote sites, all of which can amplify the number of potential targets. To assess whether a variant impacts the function of the gene product, bioinformatics algorithms are applied, often based on evolutionary and cross-species consistency. The ensuing data are lodged in a public database for access by others, who can conduct further computational analyses. Each of these steps has significant limitations that risk becoming compounded, but the major victim is often biology – placing all of this information in a functional context. However, biology can become the proverbial ‘‘two-edged sword’’ in the context of gene hunting. On the one hand, the unbiased premise of identifying gene associations with a particular disease is compromised if the results are then interpreted in light of currently favored hypotheses, as exemplified by the subjectivity and contortions created to associate newly identified gene variants in AD with amyloid [110,151,152]; or the ability to attribute relevance to a variant stalls if it does not match favored, preconceived mechanistic links – one example being the strong association between ORMDL3-GSDMB and asthma [146]. Indeed, one may wonder to what extent the search for a key causal variant stops as soon as a SNP associated with a target already considered related to the disease is identified. Apropos of relating gene mutations to disease is the urge ‘‘to link variants found in randomly selected ‘control’ individuals to different diseases and phenotypes with plausible sounding arguments’’ [153].
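The cross-algorithm agreement figure cited in Section 5.1.5 can be computed as the fraction of variants on which every predictor concurs. A toy sketch, with invented predictor calls (not real tool output):

```python
# Invented effect calls from three hypothetical variant-effect predictors.
calls = {
    "rs0001": ("damaging", "damaging", "damaging"),
    "rs0002": ("damaging", "benign",   "damaging"),
    "rs0003": ("benign",   "damaging", "benign"),
    "rs0004": ("benign",   "benign",   "damaging"),
    "rs0005": ("damaging", "benign",   "benign"),
}

def concordance(calls):
    """Fraction of variants on which every predictor makes the same call."""
    unanimous = sum(1 for preds in calls.values() if len(set(preds)) == 1)
    return unanimous / len(calls)

rate = concordance(calls)  # only rs0001 is unanimous
```

Even with three predictors and binary calls, unanimity is the exception; with more tools and finer-grained effect categories, the consistently predicted fraction shrinks further, as the text's <5% figure reflects.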
Conversely, the relevance of any genetic association can only be established with biological function as the ultimate arbiter. However, as indicated above, this may be difficult, and a ready means to probe gene associations frequently does not exist. As a result it becomes much easier to be a ‘‘scientific voyeur’’, using algorithms to interrogate existing databases and deriving statistical associations and artificial measures of relevance that lead to new publications. With the public availability of genome-wide expression data it is not even necessary to generate the database or have any intellectual investment in the data, its collection, transformation or annotation, but merely to accept it at face value. For example, an analysis of publications stemming from the publicly available databases of five ‘‘ArrayExpress’’ studies published in 2011 identified at least 90 publications in just the ensuing year utilizing these datasets [154]. Even more disconcerting, it was judged that while 38 of the 90 publications probed ‘‘biological questions’’, none (as in zero) addressed functional biology: 18 used the public database for replication or confirmation of independent datasets, while 20 re-analyzed or performed a meta-analysis of the existing datasets. Ready access to gene association databases, computational power and bioinformatics challenges the definition of an ‘‘experiment’’, which has as its implicit purpose the testing of a hypothesis. Instead, fishing (often using dynamite rather than the targeted approach implied by ‘‘gene-hunting’’) has become the sport, where claims to a lack of bias (often illusory, as databases are manually curated in light of existing mechanistic theories of disease causality [155]) mask the absence of a testable postulate using a null hypothesis, and where randomness is the order of the day.
The middle ground may lie in guidelines, now a decade old [156], for the conduct of DNA microarray experiments, where ‘‘the siren song of microarrays’’ necessitates replication and context. Certainly, global analyses of different datasets can play a critical role in replicating the conclusions of separate studies, while increasing
the power to identify rare variants not found in individual studies. But frequently the number of meta-analyses exceeds the number of primary studies, and simply treating all datasets as equal and combining them to extract information is potentially fraught with problems [154]. Technical differences (e.g., probe sequence, array platform and laboratory effects) can all impact the data generated. An attempt to reproduce 18 published microarray-based studies found that only 2 could be fully replicated; another 6 were partially reproduced, while 10 could not be repeated, yet the data for all 18 were lodged in publicly available databases [157]. In addition, the strength of the data within a database can be variable and subject to change: two-thirds of the mutations listed as pathogenic in the Human Gene Mutation Database (HGMD) were subsequently found to be benign [158]. The many degrees of separation between the scientist interrogating multiple public databases and the scientist who understands the details that went into developing and replicating the datasets can lead to many important aspects being ignored, resulting in a flawed output. Even when the investigator is intimately involved, adequate oversight and replication of critical aspects can be lacking, with serious consequences. This concern is exemplified by the previously cited clinical trial in cancer, based on gene expression data from microarray studies in cancer cell lines, that had to be stopped when the data could not be reproduced independently, significant errors were identified and the papers retracted [103,104].

5.1.7. ‘‘Next Generation’’ Sequencing (NGS) technologies

The twin difficulties of identifying causal variants and accounting for ‘missing heritability’, coupled with the development of massively parallel sequencing at an acceptable cost, have resulted in the development of whole genome sequencing (WGS) and whole exome sequencing (WES) technologies [149,159].
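The pitfall of treating all datasets as equal when combining them, noted above, can be sketched with a toy inverse-variance (fixed-effect) meta-analysis, in which a naive average and a precision-weighted pooled estimate diverge sharply (effect sizes and standard errors are invented for illustration):

```python
def fixed_effect_meta(effects, std_errors):
    """Inverse-variance (fixed-effect) pooled estimate and its standard error."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    return pooled, (1.0 / sum(weights)) ** 0.5

# Three hypothetical studies: two small and noisy, one large and precise.
effects = [0.9, 0.8, 0.1]
ses = [0.5, 0.5, 0.05]

naive = sum(effects) / len(effects)                  # treats all datasets as equal
pooled, pooled_se = fixed_effect_meta(effects, ses)  # dominated by the precise study
```

The unweighted average (0.6) and the precision-weighted estimate (~0.11) tell opposite stories; when the studies also differ in platform, probes and laboratory, neither number may be meaningful without examining heterogeneity directly.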
Since it is more difficult to determine the functional relevance of variants that occur in the non-coding portions of the genome, and successes in Mendelian-inherited disorders have localized the critical mutations to the exome, some investigators have focused on WES as the best approach, on the basis that analysis and interpretation of any associations would be easier. However, even WES identifies around 20,000–25,000 variants, which are then sorted and prioritized based on rarity and ‘penetrance’ (fully penetrant being where all individuals carrying the mutation show evidence of the disease). Rarity as a prioritizing parameter is predicated on the notion that only rare mutations can be important, and is assessed by absence from any existing databases, although, as pointed out by Brunham and Hayden [134], with the ever-expanding deposition of genome and exome sequences in public databases ‘‘the assumption that any variant found in these databases can be ruled out as disease-causing becomes increasingly difficult to justify’’. The integration of WGS/WES datasets with linkage analysis has shown some success as an alternative method to narrow down the list of candidate genes where feasible [135].

5.1.7.1. High-throughput gene sequencing: the genomic equivalent of ‘‘some assembly required’’

The difficulty of reconstructing the genome from high-throughput gene sequencing has been likened to using a wood chipper to shred 1000 copies of Dickens’ novel ‘‘A Tale of Two Cities’’ and then piecing together one complete, accurate book [160]. The task is made more difficult by long stretches of repeats and difficult-to-sequence regions (e.g., GC-rich regions), resulting in a biased focus on the accessible portions of the genome and an incomplete picture. While the technology continues to improve, current high-throughput techniques fall short of the goal of mapping the entire genome, and require labor-intensive, low-throughput Sanger sequencing methods to
help fill in some gaps. While WGS/WES are predicated on identifying differences between individuals that can account for a particular phenotype, a further complicating factor is genomic variation within individuals, where differentiated tissues can each have their own ‘personal’ genome, termed mosaicism [161,162]. Application of these high-throughput sequencing technologies has revealed that mosaicism is much more common than previously thought [162]. Consequently, the genomic analysis performed on an individual reflects the tissue sampled and the average genome of the cells examined. Genomic differences within an individual can arise from a number of causes, including genomic instability and mutations, or environmental factors such as exposure to tobacco smoke or carcinogens, or can be triggered by infection with a virus or microbe [161]. While somatic mosaicism has been implicated in cancer and in neurodevelopmental and neuropsychiatric disorders, it has been posited that it might also play a beneficial role in healthy tissues. Regardless, it adds a further dimension of complexity in re-assembling genomic sequences and defining their linkage to disease.

5.1.8. RNA sequencing

An alternative to high-throughput gene sequencing is examination of cellular RNA using highly parallel microarrays and high-throughput sequencing technologies, since this can also provide information on which genes are turned on or off, and on alternative splicing variants – factors considered to relate more closely to the phenotype [163]. Transcriptome analysis, which surveys the full range of mRNA levels at a given time to mirror which genes are actively expressed, is more sensitive and provides more quantitative information than gene expression profiling. The transcriptome is cell-type specific, so it is necessary to assess homogeneous cell populations, which can be difficult in vivo.
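Comparing transcript abundances of the sort used in transcriptome analysis requires normalizing raw read counts for transcript length and sequencing depth. A minimal transcripts-per-million (TPM) sketch with hypothetical counts:

```python
def tpm(counts, lengths_kb):
    """Transcripts-per-million from raw read counts and transcript lengths (kb)."""
    rates = [c / l for c, l in zip(counts, lengths_kb)]  # reads per kilobase
    scale = sum(rates)
    return [r / scale * 1e6 for r in rates]              # normalize for depth

# Three hypothetical transcripts: longer transcripts soak up more raw reads,
# but all three are expressed at the same per-copy level.
values = tpm(counts=[100, 200, 300], lengths_kb=[1.0, 2.0, 3.0])
```

Here the raw counts differ three-fold purely because of transcript length; after normalization all three transcripts receive equal TPM, illustrating why raw counts cannot be compared directly across genes or libraries.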
RNA expression is also controlled extensively by miRNAs, which cells produce in greater abundance than mRNA. Moreover, the assumption that mRNA expression correlates closely with synthesis of the encoded protein is inaccurate, so mRNA microarrays do not really afford information on translational status. Since mRNAs bind to ribosomes for translation, another approach is to evaluate ribosome-associated mRNA as a more accurate indicator of protein synthesis. However, even this technique does not differentiate between translationally active and repressed mRNAs, and the relationship with protein formation is not straightforward, as small variations in mRNA expression can elicit large changes in protein levels [164]. Messenger RNAs do not occur as ‘‘naked’’ ribonucleic acid sequences, but complex with specific RNA-binding proteins (RBPs) that regulate the structure, localization and function of both coding and non-coding RNAs, playing a fundamental role in cellular function [164]. RBPs can be cell-type specific, and the interactions between mRNAs and RBPs are dynamic, with mRNAs shuttling between different RBPs in response to intra- and extracellular signals. Mutations in RBPs that disrupt their function and lead to dysregulation of RNA processing have been implicated in certain disease states. Some cases of amyotrophic lateral sclerosis (ALS) have been linked to mutations in the neuronal RBPs TDP-43 (transactive response DNA-binding protein 43) and FUS/TLS (fused in sarcoma/translocated in liposarcoma) [165]. TDP-43 is involved in the transport, transcription and splicing of mRNA, and in regulating miRNA metabolism. Depletion of TDP-43 in mice in vivo with antisense oligonucleotides alters the levels of 601 mRNAs (including several transcripts encoding proteins associated with neurodegenerative disease) and causes 965 altered splicing events [166], indicating the extent to which modifications of RBPs can have widespread consequences.

5.1.9. Statistical analysis

An important issue for all of these Next Generation Sequencing technologies relates to statistical analysis of the data. It is generally recognized that without adequate statistical rigor in GWASs, where a threshold of p < 5 × 10⁻⁸ has been adopted to account for the multiplicity of comparisons, far more false-positive findings would have been reported. But with sequence data and the different types of variants, it is not reasonable to assume that each variant has an equal chance of occurring or an equivalent influence on the disease phenotype. Currently there is too little information to develop simple quantitative parameters for sequence information; the current philosophy is to evaluate large sample sizes and replicate any findings to reduce the error rate, but a common statistical basis has not yet been developed.

5.1.10. Interpreting genotyping studies: application to autoimmune disorders (AIDs)

One example of the value of genotyping is in autoimmune disorders (AIDs), which share common clinical and immunological features and cluster, as a group of diseases, in families [167,168]. Some of these disorders are organ-specific – type 1 diabetes (T1D) targets the pancreas, for example; some, like systemic lupus erythematosus (SLE), are systemic and attack multiple organs; while the majority have a primary target but are associated with additional symptoms and pathologies, such as rheumatoid arthritis (RA), affecting synovial joints, and inflammatory bowel disease (IBD), affecting the gastrointestinal tract, both frequently accompanied by fever and fatigue. Clinically, AIDs can be divided into two groups depending on the significance of autoantibodies: seropositive AIDs such as RA, T1D and celiac disease; and seronegative AIDs, including Crohn’s disease (CD), psoriasis and ankylosing spondylitis (AS).
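The genome-wide significance threshold of p < 5 × 10⁻⁸ discussed in Section 5.1.9 corresponds to a simple Bonferroni correction for roughly one million independent common variants:

```python
def bonferroni_threshold(alpha, n_tests):
    """Per-test significance threshold after Bonferroni correction."""
    return alpha / n_tests

# ~1 million independent common variants yields the conventional
# genome-wide significance threshold.
threshold = bonferroni_threshold(0.05, 1_000_000)

# Only hypothetical p-values below the corrected threshold survive.
p_values = [3e-9, 4e-8, 6e-8, 1e-5]
significant = [p for p in p_values if p < threshold]
```

The point made in the text is that no comparably principled correction yet exists for sequence data, where variants are neither equally likely to occur nor equally likely to influence the phenotype, so a flat per-test threshold is harder to justify.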
Despite these categorizations, shared genetic susceptibility loci have been identified across this range of disparate phenotypes, and there are significant overlaps in the T cell-mediated pathways promoting disease. Indeed, the common features and underlying mechanisms have prompted many researchers to take drugs that work in one AID and apply them to others with the full expectation that the compounds will show benefit. The inter-relationships between AIDs have been explored systematically by genotyping large patient populations covering six AIDs (AS, IBD, RA, T1D, psoriasis and celiac disease) using a shared SNP array termed the ‘Immunochip’ [168]. This analysis found that while sharing of genetic loci was common, with 71 loci being statistically associated with two or more diseases, the relationships were complex and often opposite. While for some AIDs there were uniform associations – IBD and AS, for example, showing correlation of the lead SNP at 19 out of 20 shared loci – the arthropathies RA and AS shared only five loci, and not all were concordant. Indeed, perhaps the most intriguing finding is the number of loci shared between AIDs where the same SNP is strongly associated with disease, but in opposite directions. Among the examples provided by Parkes et al. [168]: variants at IL27, IL-10, STAT3, CD40 and FCGR2C (CD32) are related to increased risk of CD, ulcerative colitis, Behçet’s disease and AS, but are protective against RA, T1D, SLE and multiple sclerosis. A variant in the p40 subunit shared by IL-12 and IL-23 is associated with increased risk for CD and AS, but is protective for psoriasis, such that these discordant associations are found both within and across serotypes. Similar observations have been made with respect to asthma, widely regarded as primarily a Th2 disorder, in comparison to other AIDs [169].
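Discordant sharing of loci of the kind described by Parkes et al. can be screened for programmatically. A sketch with invented odds ratios (OR > 1 = risk allele, OR < 1 = protective) flags disease pairs in which the same variant acts in opposite directions:

```python
# Invented per-disease odds ratios for one shared variant
# (illustrative values only, not from the cited Immunochip study).
odds_ratios = {"CD": 1.3, "AS": 1.2, "RA": 0.8, "T1D": 0.85, "psoriasis": 1.1}

def direction(odds_ratio):
    return "risk" if odds_ratio > 1 else "protective"

def discordant_pairs(odds_ratios):
    """Disease pairs in which the same variant acts in opposite directions."""
    names = sorted(odds_ratios)
    return [(a, b) for i, a in enumerate(names) for b in names[i + 1:]
            if direction(odds_ratios[a]) != direction(odds_ratios[b])]

pairs = discordant_pairs(odds_ratios)
```

With two risk and two protective diseases among uppercase-named entries, most pairings are discordant; a real analysis would also carry confidence intervals, since an OR near 1 cannot be confidently assigned a direction at all.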
Asthma GWASs reproducibly associate the ORMDL3-GSDMB region with the disease, and this same region is also associated with RA, CD and ulcerative colitis; however, while the variant in GSDMB is protective for asthma, it is a risk allele for the
other AIDs. Generally, genes involved in Th1/Th2 balance, such as IL-13, or in antigen presentation (HLA-DRA) were discordant between asthma and AIDs (both being risk alleles for asthma but protective for psoriasis or ulcerative colitis, respectively), while genes involved in the regulatory T-cell pathway were concordant, including SMAD3, C11orf30-LRRC32 and IKZF4 [169]. Despite not knowing the importance of these genetic variants to the phenotypes associated with each AID, these findings have important implications for translational research, and demonstrate that the differences between AIDs are far more nuanced than sero-typing or defining T-cell sub-types would reveal. Developing an immunomodulatory drug without recognizing these distinctions, and expecting it to work broadly across AIDs, is both naïve and potentially dangerous. For example, IL-17 monoclonal antibodies show efficacy in psoriasis, but are problematic in CD [170]. A variant in TNFR1 is protective for AS but increases the risk for MS, with the result that therapies targeting TNFα suppression are highly effective in AS but detrimental in MS [171].

5.1.11. Epigenetics

Epigenetics refers to heritable alterations in gene expression that do not involve variation in the DNA sequence, and that can be modified by environmental factors and certain drugs [172,173]. Examples of the influence of gene–environment interactions include: (i) studies on Dutch families prenatally exposed to famine during the ‘Hunger Winter’ at the end of World War II, and caloric restriction in mice during pregnancy, prompting epigenetic reprogramming that persisted for up to two generations and may be linked to later-life diseases [174]; and (ii) the increased incidence of schizophrenia associated with birth in winter months [175]. There are three major mechanisms of epigenetic regulation: histone modifications, covalent DNA modifications and regulation by non-coding RNAs [176].
Modifying histones by adding or subtracting functional groups alters chromatin structure to stimulate or inhibit gene expression. At least 16 histone modifications have been described, including ubiquitination, phosphorylation and sumoylation, but the two best studied are acetylation and methylation; histone acetylation, for example, is associated with increased gene transcription and expression. It is important to recognize that the enzymes affecting histone acetylation (HATs, HDACs) do not act only on histones, but can modify the acetylation of a number of proteins including p53, STAT3 and HIF1α. Consequently, HDAC inhibitors such as valproic acid, vorinostat and romidepsin can affect transcription factors, cell cycle and apoptosis pathways, among others, in a histone-independent manner [174], so their effects cannot be ascribed to epigenetic modifications without adequate supporting evidence. DNA can also be modified by methylation, principally of cytosines immediately followed by guanines at sites termed CpG islands, which occur near gene promoters and result in gene repression. (This contrasts with methylation of cytosines in the exome, which does not appear to decrease transcription.) Whole-genome bisulfite sequencing is used to measure DNA methylation at the tissue or cellular level, and excessive methylation at CpG islands adjacent to the promoters of tumor suppressor genes has been linked to certain cancers [177]. While the protein-coding region makes up less than 2% of the human genome, 70% is actually transcribed, resulting in a plethora of non-coding RNAs thought to contribute to the epigenetic regulation of gene expression. While miRNAs – approximately 22 nucleotides long – are the best studied, form part of the RNA-induced silencing complex, and have been associated with diseases such as heart failure and cardiac hypertrophy [178], they are only one of several non-coding RNAs implicated.
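The logic of the bisulfite sequencing mentioned above can be sketched in a few lines: bisulfite converts unmethylated cytosines to uracil (read as T on sequencing), while methylated cytosines are protected, so a reference position that still reads C is called methylated. This is a toy single-read example; real pipelines aggregate many reads and restrict calls to CpG context:

```python
def call_methylation(reference, bisulfite_read):
    """Indices of methylated cytosines: reference C still read as C after
    bisulfite treatment (unmethylated C is converted and reads as T)."""
    assert len(reference) == len(bisulfite_read)
    return [i for i, (ref, obs) in enumerate(zip(reference, bisulfite_read))
            if ref == "C" and obs == "C"]

#            0123456789
reference = "ACGTCCGATC"
read      = "ATGTCTGATT"  # cytosines at 1, 5 and 9 converted; position 4 retained
methylated = call_methylation(reference, read)  # -> [4]
```

In practice the fraction of reads retaining C at each position gives a quantitative methylation level per site rather than the binary call shown here.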
Moreover, epigenetic modifications are cell and tissue specific, while all three epigenetic mechanisms can interact to
form one large regulatory network, adding to the challenges of unraveling this burgeoning field.

5.1.12. Gene hunting: hope or hubris?

The identification of genetic variants, and even of specific genes of interest, has increased enormously, and will increase exponentially as high-throughput technologies are more widely implemented. With these successes has come the recognition that gene expression is ever more complex. The phenotype resulting from a mutation is context-dependent – modified, for example, by genetic background, epistasis, epigenetics and other factors. But gene expression is also a highly regulated and dynamic process, aspects of which – gene–gene interactions, transcriptional regulation, etc. – are often considered singly and in the abstract. Gene sequences are usually presented in a linear fashion, although, of course, they are anything but. A more accurate portrayal of chromosomal structure, based on single-cell Hi-C, a variation of the chromosome conformation capture (3C) technique [179], has been compared to an ‘‘insane ball of spaghetti’’ [180], reflecting the myriad potential direct physical interactions between chromosomes that can regulate genes that otherwise do not appear to be related – events that have been studied only in a limited manner to date. The significance of physical interactions is demonstrated further by the formation of chromatin loops that bring remotely located sequences into spatial proximity to up- or down-regulate genes. Larger chromatin loops are thought to segregate complete regions of the genome from each other and to position them into distinct nuclear compartments to regulate their activity [181], adding further complexity to what was previously assumed to be a relatively simple and predictable cartographic representation [180].
Aside from the structure of the chromosomes, another aspect to be considered in gene regulation is the architecture of the nucleus, which also has discrete compartments [182,183]. Gene expression involves multiple steps – chromatin remodeling, RNA processing and export, then translation in the cytoplasm. Each of these events involves many components acting in a coordinated fashion in distinct intranuclear compartments at particular locations. Moreover, a host of proteins, highly mobile within the nucleus, is involved in each of these steps, creating an extremely dynamic situation [182]. Even the spatial positioning of chromosomes and genes within the nucleus is an organized process: in general, genes localized toward the nuclear edge tend to be transcriptionally silenced, and when a gene is activated it moves toward the interior of the nucleus. The association of the cystic fibrosis gene with the nuclear envelope, for example, correlates closely with its transcriptional activity [181]. These limited examples of the importance of chromosomal and nuclear architecture to gene expression serve to further emphasize that the identification of a SNP or mutation is a relatively small piece of a much larger puzzle. Thus the recent findings on chromosome structure intersect with the relationship between the etiology of cancer and nuclear morphology and disorganization – recognized for nearly a century and a half and including DNA content (ploidy) and chromatin organization – observations that converge on gene regulation, epigenetic modulation and cancer genesis [184].

5.1.13. PheWAS – Phenome-Wide Association Studies

Phenome-Wide Association Studies (PheWAS) are an emerging technology that can be used as a form of reverse GWAS to determine the range of clinical phenotypes (e.g., disease states) associated with a given genotype [185,186].
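In outline, a PheWAS scan tests one genotype against many phenotypes. A minimal sketch with invented counts, ranking by a continuity-corrected odds ratio in place of a formal association test:

```python
def odds_ratio(a, b, c, d):
    """Odds ratio from a 2x2 table (carriers with/without the phenotype: a, b;
    non-carriers with/without: c, d), with a 0.5 continuity correction."""
    return ((a + 0.5) * (d + 0.5)) / ((b + 0.5) * (c + 0.5))

# Invented counts for one SNP across several phenotypes (illustrative only).
tables = {
    "multiple sclerosis": (30, 70, 10, 90),
    "type 2 diabetes":    (20, 80, 21, 79),
    "psoriasis":          (5, 95, 15, 85),
}

# Rank phenotypes by strength of association with the genotype.
ranked = sorted(tables, key=lambda ph: odds_ratio(*tables[ph]), reverse=True)
```

A production PheWAS would replace the ranking with per-phenotype significance tests and a multiple-comparison correction, since hundreds to thousands of phenotype codes are scanned for each variant.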
In contrast to GWAS, which selects from many genetic variants those associated with a single phenotype, PheWAS is designed to select from among many phenotypes – yet another ‘ome’, the ‘phenome’ – those associated with a single gene, which allows related phenotypes
(e.g., schizophrenia, bipolar disorder) to be included in a single analysis. When used in an unbiased manner, PheWAS has the potential to identify new genetic associations and provide additional insight into disease mechanisms [185]. PheWAS was used to independently confirm literature findings that the Class II MHC allele HLA-DRB1*1501 was associated with MS, alcohol-induced cirrhosis of the liver, erythematous conditions and benign neoplasms of the respiratory and intrathoracic organs [187]. The authors noted that ‘‘this was the first external validation of PheWAS’’, demonstrating ‘‘the complex etiologies associated with the HLA-DRB1*1501 loci.’’

5.2. Target validation/qualification

Target validation, like that of biomarkers [188], is an essential part of the translational medicine paradigm and logically follows on from the process of identifying a target of interest for a given therapeutic indication [189,25]. While much has been written regarding the multiple in vitro and in vivo approaches to validate a target, the precise meaning of the term validation has been a subject of considerable debate. One definition that sets a rather low hurdle is ‘‘The process of gathering information about a potential drug target prior to initiating a screen to find biological or chemical modulators of the target of interest’’ [190] – ostensibly a literature-based collation activity that many in the field would view as the minimal information necessary to even consider a target, let alone justify initiation of a screening effort. However, without a working definition of target validation, progress and success in the process are impossible to quantify. In an absolute sense, the ONLY validated drug target is one at which a safe, efficacious and selective compound, agonist or antagonist, with drug-like properties produces a quantifiable and robust effect that is beneficial in the targeted disease population [191,192].
Everything else that occurs prior to this final step is more accurately termed target confidence building [192] or target qualification [193], with each new piece of data related to a target adding confidence in a hierarchical and ongoing manner – further qualifying the target rather than constituting a ‘‘one-time experiment’’ [194]. Far from being an issue of semantics, the erroneous assumption that a target can be definitively validated preclinically, or in the early stages of clinical trials, frequently provides a level of confidence that is premature and is probably a significant contributor to multiple Phase II clinical failures. For instance, numerous clinical candidates in the AD area, acting via various mechanisms affecting amyloid accumulation, have failed pivotal late-stage trials for the simple reason that there is no conclusive evidence that amyloid is causative in the disease [110]. On the other hand, with a best-in-class rather than first-in-class approach [15,195], validation of the target has to a very major extent been de-risked, the remaining issues being off-target activities and potential side effects unique to the ‘‘best-in-class’’ fast-follower compound(s). The use of the term target qualification thus sets more realistic expectations in the drug discovery process, and can be informed by a variety of means that provide data which, once integrated and prioritized, can enable the final validation process outlined in Fig. 1. An initial step is to establish whether the target under investigation is present in the tissue involved in the disease (e.g., in the CNS for a psychiatric disorder) and whether there is evidence from a diseased population implicating changes in the target that parallel the disease and its progression and prognosis. Knowing the function and phenotype of the target in non-diseased tissue – its normal physiology – is also a critical step.
Additional activities in the qualification process include functional pharmacology approaches such as expression profiling, target knockout (including tissue-restricted and inducible),
blockade using RNA interference, antisense oligonucleotides, monoclonal antibodies or small molecules, target overexpression (including small-molecule approaches [196]), chemogenomics [197] and chemical reporters [198]. Additional facets include animal models, biomarker assessment, and patient and clinical trial feedback, with the former including geno- and pheno-types, GWAS/NGS, experiments of nature, and systems integration and analysis (Fig. 1). A compelling approach to target qualification involves experiments of nature – naturally occurring mutations or unique response modifiers that impact the activity of a particular protein and show a parallel relationship with a disease or a treatment [191]. Experiments of nature can be used to define pathways of interest if a specific target cannot be identified readily or is deemed ‘‘undruggable’’. Such is the case in familial hypercholesterolemia, in which patients with a mutation in the gene for the LDL receptor provided a causal link between LDL cholesterol and heart disease. Since the LDL receptor per se was not a viable target, but HMG-CoA reductase was known to be the rate-limiting enzyme in cholesterol biosynthesis, the reductase became an obvious target for reducing cholesterol levels and led to the development of the statins [199]. Other examples of experiments of nature include gain-of-function mutations [191], such as those in PCSK9 (proprotein convertase subtilisin/kexin type 9), associated with autosomal dominant high LDL levels and coronary heart disease [200], and the SCN9A (voltage-gated sodium channel Nav1.7) channelopathy associated with primary erythermalgia [201].
While several such gene–drug pairs have been identified, both historically and prospectively, and the application of human genetic information is an attractive and logical approach to target qualification, ‘‘experiments of nature’’ are relatively few in number, with retrospective associations and putative repurposing efforts with approved drugs representing the more typical outcomes of this approach [191]. One recent experiment of nature was based on the finding, in the blood of a single patient, of a natural IgG antibody against exosite 1 of thrombin that uniquely conferred anticoagulation without a bleeding tendency – the holy grail of anticoagulant drug discovery – and led to the development of the IgG antibody ichorcumab (http://www.tcpinnovations.com/drugbaron/ichorcumab-the-blood-of-the-gods/). Given the complications associated with GWAS and NGS studies (Section 5), genetics-based information – like other sophisticated cutting-edge technologies, including expression profiling, target knockout and target overexpression in genetically engineered animal models [20], chemogenomics and chemical reporters – also requires systematic validation to promote a facile approach to target qualification and, by extrapolation, translation. Like the identification and validation of biomarkers [16,189], the effective translation of NCEs from efficacy in animal models to patients involves myriad challenges in the context of target qualification activities, from an exercise in determining the practical utility of new technologies to their routine use in the drug discovery process. In this setting, Black noted in 1986 [202], in the context of small molecules, that ‘‘Pharmacologically classified drugs can be used at two levels: to manipulate biosystems at the physiological level, which is their main use in therapeutics, and to probe biosystems at the biochemical level to uncover mechanisms and regulations.
In the first case, the analytical classification is interesting but nearly irrelevant. In the second case, the analytical classification can be beguiling but misleading." Black's concern that drugs, and by extrapolation tool compounds, can be "beguiling and misleading" reflects the insight that investigator confidence in new discoveries (and technologies) often rests on supportive science rather than on a detailed interrogation, resolution and reaffirmation of the underlying assumptions. This then leads to

Please cite this article in press as: Mullane K, et al. Translational paradigms in pharmacology and drug discovery. Biochem Pharmacol (2013), http://dx.doi.org/10.1016/j.bcp.2013.10.019

[Fig. 1 schematic, redrawn as text. Panels and contents:
Genotype: GWAS; NGS; PheWAS; Epigenetics; Twin studies; Experiments of nature.
Cellular phenotype: Primary cells; Cell-lines; Embryonic stem cells; iPS cells; Reprogrammed cells.
Transcriptome: mRNA; ribosomal RNA; RNA binding proteins.
Proteomics.
Functional pharmacology: Knock-out or knock-down (gene, siRNA, Abs, drugs, research compounds); Target overexpression; Reporters (chemical, fluorescent, etc.).
Normal human: Phenotype; Genotype; Function.
Patient: Phenotype; Genotype; Symptoms; Family history; Risk factors.
Animal models: Multiple models (wild and transgenic); More than a single species; Clinically relevant endpoints; Matched to drug exposure.
Pharmacokinetics: ADME; Target engagement; More than a single species.
Biomarker: Disease-associated; Target-associated; Quantitative; Specific; Readily accessible.
Safety assessment: Toxicity; Carcinogenicity; Organ function.
Clinical trials: Phase 0–IIa; PK/PD; Safety; Target engagement; Biomarker.
Systems integration and analysis: Validation; Translatability; Druggability; PK/PD relationship; Pharmacogenomics.
These activities converge on the progression: Translatable target → Qualified target → Validated target.]
Fig. 1. Translatable, qualified and validated targets. While the phrase "validated target" is used throughout the drug discovery process, its precise meaning – where stated – is often confused with the activity of qualifying a target [193], an iterative, hierarchical process that builds confidence [192] and determines the utility of the target as agents that selectively interact with it are advanced to the clinic. The only validated drug target is one at which a safe, efficacious and selective drug produces a quantifiable and robust effect that is beneficial in a patient with the targeted disease [191,192], so for research and drug discovery purposes the term is usually a misnomer. The present figure reflects the activities that contribute to the identification of a translatable target that can progress to a qualified target, and ultimately to a validated target. Translatable target – data from individuals without the disease indication being studied can be assessed to identify the "normal" genotype, phenotype and function of the target of interest. These data can then be complemented with information on the same target in diseased patients. These data sets can then be subjected to genotyping, transcriptomic and proteomic assessment, cellular phenotyping and functional pharmacological analysis, together with animal model studies. Data from these disparate approaches, including experiments of nature [191], can then be integrated with available biomarker data and information, where available, from clinical trials to analyze translatability, which may entail additional data generation to address specific points of interest. It would be reasonable to assume that a biomarker would be closely related to a translatable target or pathway(s) associated with its function [276]. With the use of chemogenomics [197], chemical reporters [198] and screening [15,16,205], New Chemical Entities (NCEs) can be identified as tool compounds to facilitate interrogation of cellular, functional and animal systems. Elements of NCE qualification/validation have been included in the figure in terms of PK/PD relationships [21,242] including ADME, target engagement [18,19], translatability [278–280] and safety [243]. Once an NCE enters clinical trials, data from the trials on safety, biomarkers, engagement of the target in humans and PK/PD relationships can be used to both challenge and reinforce the data sets derived previously from normal/patient databases, etc., adding precision and insight to the translatability of the target. This is an ongoing, integrative, non-linear process that requires considerable intellectual commitment and transparency. Qualified target – through the activities described above, a translatable target will become progressively more qualified, awaiting robust Phase II proof of concept data to move to validated status – or, alternatively, it may be erroneously ascribed as qualified.

a knock-on effect where NCEs are advanced to clinical trials on the basis of less than robust preclinical findings, which, when coupled with an absence of validated biomarkers, contributes to Phase II attrition [6]. A recurrent theme is that science, as currently conducted, is failing due to investigator overconfidence: the "pretense-of-wisdom" syndrome outlined by Braff [203], borrowing from the "pretense-of-knowledge" syndrome in macroeconomics described by Caballero, in which the dynamic stochastic general equilibrium approach "has become so mesmerized with its own internal logic that it has begun to confuse the precision it has achieved about its own world with the precision that it has about the real one". This leads to yet another illogical bias – that the complexity in nature is often underappreciated [180,204].

5.3. Target-based versus phenotypic screening approaches

Despite the major focus on target-based approaches to drug discovery, phenotypic or activity-based screening [205] has continued to be a major source of new drugs or leads for new
drugs [14,15]. While target-based approaches are often considered intellectually superior – they are rational, systematic and readily lend themselves to reductionistic technologies (e.g., allowing large numbers of compounds to be evaluated via HTS, with "hits" selected by computer algorithms and then refined by focused library chemistry, designed to expedite the discovery process with minimal human input or interference [8]) – they have not proven to be as effective as phenotypic screening approaches. Phenotypic screening [205], whether conducted in animals, cells or lower organisms (e.g., Caenorhabditis elegans, Drosophila melanogaster), is far slower than HTS, requiring more quantitation and interpretation as compounds are analyzed and subsequently optimized. It is also richer in content, so what is lost in speed can be made up for in quality [206]. Both methods have their strengths and weaknesses, but given the dominant position of target-based drug discovery efforts over the last two decades, this approach is viewed as a major contributor to the declining success in effective compound translation to Phase II proof of concept [8,62,91].


With this backdrop, it is perhaps surprising how much phenotypic screening methods have contributed to successful drug discovery [14], especially in the CNS area [207] where disease causality is complex [208]. A major factor in the differing outcomes from phenotypic and target-based approaches is that the latter obviously requires identification of a target that is pivotal to disease pathophysiology. Many times that "target qualification" component (see Section 5.2) comes up short with respect to the clinical condition, reflecting the limited understanding of disease pathophysiology, especially in diseases where there may be multiple targets, leading to a compound requiring interactions with several targets – a polypharmacological or "magic shotgun" profile – to achieve efficacy [209,210]. An example of this approach is in the area of schizophrenia, where the complexity of the disease – both genetic and environmental – requires efficacious compounds to interact with dopamine and 5-HT receptors [209,211]. There are also numerous examples where compounds that are selective for a target and show convincing efficacy and safety in animal models, e.g., p38 kinase in rheumatoid arthritis [212], IL-4 in asthma [213], NK-1 neurokinin receptors in pain [214] and β-amyloid accumulation in AD [110], have repeatedly failed in clinical trials, suggesting that in these instances target selectivity – to the extent that the therapeutic target has been validated – and animal efficacy represent only a portion of disease causality, while there is ample evidence that these diseases/conditions are multifactorial. This contrasts with biologicals, e.g., antibody therapeutics, where selectivity for a single target confers remarkable efficacy benefits, such that more than 20 of these entities have been approved for use, mainly in autoimmune and inflammatory disease states and cancer [215].
The value of phenotypic screening is evidenced by its emergence as an important tool in the search for new targets resulting from genotyping cells derived from patients. This approach is predicated on the finding that some subjects carrying a genetic variant for a disease do not develop the expected phenotype (as discussed in Section 5.1.3), an outcome attributed to the presence of other mutations that effectively suppress the expression of the disease variant. Genetic modifier screens – in which a phenotype associated with a particular genetic mutation is identified and an unbiased search is then performed for genes that enhance or suppress that phenotype – are being used increasingly in target discovery and validation in yeast and mammalian cell-culture systems, as well as in lower organisms. For example, a number of human hereditary disorders are attributed to trinucleotide genomic repeats, e.g., CAG in HD [143], CTG in myotonic dystrophy type 1 [216], and CGG in fragile X syndrome [217]. These long repeats show instability, with disease occurring when the number of repeats exceeds a certain threshold. A screen for genetic modifiers in Drosophila expressing CAG repeats revealed several candidate genes that affected repeat instability, suggesting that different aspects of the instability are under independent genetic control [218]. An extension of the search for genetic modifiers is the search for chemical modifiers – traditional phenotypic screening of small molecules in another guise – as exemplified by the identification of compounds acting via Hedgehog, Insulin-like growth factor or Transforming growth factor β signaling pathways to increase or decrease cardiomyocyte proliferation in zebrafish embryos [219].

5.4. Natural products as a basis for phenotypic discovery efforts

With concerns related to productivity metrics using a target-based approach to drug discovery, there has been considerable interest in revisiting the ultimate phenotype-dependent drug source: natural products [220].
If target selectivity exemplified by monoclonal antibodies lies at one end of the selectivity spectrum, natural products as therapeutics lie at the other. Often natural product efficacy depends on an activity profile that is notoriously lacking in selectivity and specificity. With the resurgence of interest in Traditional Chinese Medicine [221–223], contemporary drug discovery technologies and philosophies are being increasingly used in a more rigorous, qualitative context to identify specific components of these ancient medicines and to define their targets in order to assess the specific basis of their therapeutic effects. A significant number of natural products are uniquely produced by microbes interacting with their host, and are thus contingent on a chemical symbiosis that offers new avenues of combinatorial natural product discovery [224]. However, while many natural products have an extensive history as therapeutics, this has not always been replicated in double-blind randomized clinical trials (RCTs), e.g., saw palmetto in benign prostatic hyperplasia [225] and St. John's Wort (Hypericum perforatum) in depression [226], making the isolation of single, presumably active, entities still a risky proposition in terms of clinical translatability. Additionally, while the increased interest in traditional Chinese remedies has led to a large increase in the number of publications (a PubMed search for "Chinese Traditional Medicine" returns 1940 articles from the year 2000 to the present compared to 270 articles in the preceding 30 years), these are often a catalog of biological activities with little discrimination, interrogation or integration of the outcomes, tending to obfuscate rather than clarify the field. Thus their use as starting points for NCE-based drug discovery efforts is challenging, as the multiple targets inevitably identified, usually in vitro, can be neither prioritized relative to one another nor distinguished from one compound to the next. This can be illustrated from a comprehensive review on the potential utility of phytochemicals in cancer [227].
One compound – epigallocatechin-3-gallate (EGCG) – an antioxidant present in green tea that may have therapeutic potential in the treatment of cancer, neurodegeneration and HIV, has a tabulated list of activities that runs to two full journal pages and includes activities against a variety of topical targets including ERK1/2, AP-1, NF-κB, I-κB, VEGF, Akt, prostaglandins, p21, p27, p53, caspases 3, 8 and 9, Bax/Bcl2, p38 kinase, PI3K, cyclin D1, cdk2, cdk4/6, PDGFR, FGFR, DNA methylation, STAT, etc. In another example, curcumin – the yellow pigment in turmeric (curry powder) with reported anti-inflammatory effects that may be beneficial in the treatment of cancer, neurodegeneration, HIV and a variety of other disease states with an inflammatory component – has approximately 50 identified molecular targets [228] that, like those of EGCG, include NF-κB, AP-1, Bax/Bcl2, etc. Moreover, many of these in vitro activities have been observed at the micromolar level, while peak plasma levels in vivo can be more than 10-fold lower due to bioavailability issues, making it difficult to establish whether particular activities have any pharmacological relevance. The utilization of appropriately controlled experiments to discriminate between and prioritize these activities, to provide some functional and potency perspective and consistency, is a rare event. Deciphering this plethora of activities to determine how these two exemplar compounds may be acting in any given situation is a perplexing Gordian knot. As often seems the case with publications in this field, whichever mechanism is being investigated seems to be the critical one of interest while the rest are ignored. Nonetheless, despite issues surrounding selectivity, specificity, potency and a chemical complexity that frequently fails to conform to Lipinski's Rule of 5 [229], natural products are a unique source of novel pharmacophores with distinctive activity profiles, e.g., morphine, staurosporine, taxol, and tacrolimus.
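Lipinski's Rule of 5, which natural products frequently fail to satisfy, can be counted mechanically: a compound accrues one violation each for molecular weight above 500 Da, calculated logP above 5, more than 5 hydrogen-bond donors, or more than 10 acceptors. A minimal sketch (the property values below are illustrative assumptions, not measured data):

```python
def lipinski_violations(mw, logp, h_donors, h_acceptors):
    """Count violations of Lipinski's Rule of 5 for an orally active drug."""
    rules = [mw > 500, logp > 5, h_donors > 5, h_acceptors > 10]
    return sum(rules)

# Hypothetical property values, for illustration only.
compounds = {
    "drug-like NCE": dict(mw=350, logp=2.1, h_donors=2, h_acceptors=5),
    "EGCG-like polyphenol": dict(mw=458, logp=1.2, h_donors=8, h_acceptors=11),
}
for name, props in compounds.items():
    print(f"{name}: {lipinski_violations(**props)} violation(s)")
```

Polyphenolic natural products typically fail on the hydrogen-bonding counts even when their molecular weight and logP are unremarkable, which is one concrete sense in which they sit outside conventional "drug-like" chemical space.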
While natural products continue to be the basis for new drugs [230] especially antibiotics [231], their use in mainstream drug discovery has declined [224]. While this may be partly due to difficulties in interpreting the science to create a logical path
forward, it also reflects an aspect of the reductionistic molecular approach currently in vogue. And while HTS of natural product libraries [232] can complement that of small molecules, the repetitive finding of multiple overlapping hits between natural products may have questionable value.

5.5. Animal models and their predictive value

The contribution of animal models to understanding human disease states and their treatment has been a long-standing and contentious issue [20,233–235]. While animal models in the areas of inflammation [236], asthma [237], cancer [238] and CNS diseases [239] all have their various benefits and limitations from a therapeutic area perspective [20], as in vivo models they collectively represent the only PK/PD link to aid in the translation of NCEs to humans, by providing critical information on NCE exposure, safety and toxicology. The latter, usually conducted in animal safety models, includes measurement of peak and steady-state plasma levels of NCE with both acute and chronic dosing paradigms, protein binding, tissue accumulation, clearance, volume of distribution and target engagement when administered by different routes; metabolism, including the formation of active metabolites; and the effects of long-term NCE exposure on various organ systems [21,240–242].

6. Hierarchy in advancing targets and therapeutics

Historically, academic research projects and industry drug discovery programs have adopted a hierarchical approach to advancing the science – e.g., from selecting and "validating" targets, to compound screening at the isolated receptor/enzyme/target level, then in cells via a cell-based functional assay, leading – hopefully with some intervening pharmacokinetic determinations – to an in vivo model as the ultimate preclinical determinant of the potential clinical activity and safety of a New Chemical Entity (NCE).
Various newer technologies have been incorporated into this hierarchical sequence as they became available, the majority related to in vitro testing. While animal models – including those used for efficacy (measuring compound effects on cardiovascular, respiratory and immune system function, glucose homeostasis, attenuation of weight gain and tumor growth, depression, anhedonia, addiction, etc.), ICH S7 safety [243], ADME work and toxicology – are typically the final arbiter of a decision to move a compound into clinical trials, they have had mixed value, especially as related to efficacy. This has contributed significantly to the poor translation of preclinical science to human testing, and the confidence placed in these models is misplaced and naïve unless they are placed in context with other data sources as part of a systematic, objective and transparent review and, where necessary, repeat testing.

6.1. Stroke

In the stroke area, of more than 900 putative neuroprotective treatments that showed benefit in animal models – mainly the MCAO (middle cerebral artery occlusion) model in gerbil or rat – 114 were examined in clinical trials in which aspirin and thrombolytics (e.g., alteplase, rtPA) had shown robust efficacy [244,245], yet none was efficacious [246,247]. Subsequent analysis [248] identified several variables, including timing of NCE administration as well as age, comorbidities and physiological status, as contributing to the disparity between findings from the animal models and the clinical trial outcomes, reflecting bias in the preclinical models that resulted in an "overstatement of neuroprotective efficacy" [248–250]. That these models are still being used in preclinical research more than a decade after their total
lack of translational value was reported questions the logic of their use other than as a means to publish.

6.2. Additional therapeutic areas

Similar translational failures have been reported in trials for acute myocardial infarction, asthma, various inflammatory disorders, osteoporosis [244], diabetes [251], and a raft of CNS indications [239,252]. In a large systematic study [244] that compared the results from animal studies for a number of interventions for which there was unambiguous evidence of a clinical effect, in many instances the results in animals were opposite to those seen clinically. As noted, what is equally surprising is the extent to which such models continue to be used, and the ensuing results interpreted as meaningful, irrespective of their inherent limitations and past history.

6.3. Considerations

There are few therapeutic areas (if indeed any) where proponents are not currently scrambling to improve the translational capabilities of existing in vivo models – wild type or genetically, chemically and surgically manipulated – especially their temporal relevance to the human condition they supposedly reflect, and their effective replication between research groups, which is yet another source of discord and confusion. While both of these objectives have merit, only the former is likely to impact patients and contribute to the success of drug discovery; the latter may only perpetuate un-interpretable science on a broader scale. While genetically engineered rodents lacking or overexpressing a putative therapeutic target were thought to be a viable solution to improving animal models, these have translation issues similar to those of the historical models, in many instances because they are not models of a disease state but rather of a specific target manipulation thought to be involved in that disease.
Additional issues of translatability include species and strain specificity of the disease phenotypes [253], with differing background genetics and responses to environmental cues that can modify the phenotype, and discrepancies in both the intensity of the trauma or gene manipulation used to create the disease and its temporal relationship – in terms of time to detectable disease onset and the "pre-incubation state". As an example, historical chronic pain models in animals are evoked by surgery or chemicals and interrogated using external provocation. In contrast, in humans, time to onset of a chronic pain state is subtle and prolonged, reflecting extensive neuronal rewiring in both the periphery and the CNS, with pain being present in the absence of any evoked stimulus, or even in the absence of the affected limb [254].

6.4. Following the clinical path

While considerable efforts are ongoing to standardize models and improve replication and relevance to the clinic, the literature is replete with contradictory findings that do not aid interpretation or advance science. Clinical science 20–30 years ago suffered similar problems of data reproducibility, validation and interpretation, reflecting a "marginalization of clinical science" [255] coupled with a loss of serendipity [256] and an intellectual context that had "lost its capacity to make substantial contributions" [255]. This has largely been overcome by implementing sound practices that include blinding, sample size determination, randomization, appropriate statistical analyses, management of confounding factors, prospectively defining how outliers are managed, etc. – practices whose absence contributes more to bias in preclinical than in clinical studies [257]. More recently, however, similar recommendations have been made in the preclinical area
[107,258,259], prompted by both clinical failures and the lack of replicability of many studies [35,36]. It would appear logical that the more robust the response (across different species, settings, time points and endpoints), the more likely the response is real and potentially translatable, although confirmatory data supporting this premise are largely missing. Nonetheless the FDA has subscribed, at least in part, to this logic: in response to the potential threat of bioterrorism, it issued a ruling that medical countermeasures could be approved based on efficacy in multiple species and safety data in animals and humans, without requiring any demonstration of clinical efficacy. Despite this liberal opportunity, only two products were actually licensed, both of which had already been approved for other indications [257], raising the question of how far other treatments fell short of even the requirement to show reproducible effects across species. With regard to model standardization and replication, it has been suggested that systematic reviews/meta-analyses of preclinical studies, including both positive and neutral/negative outcomes, should become part of the Cochrane Library [260].

6.5. Pluripotent stem cells as disease models

The fact that rodent models of human disease states do not faithfully recapitulate key features of the disease or responses to potential therapeutic interventions is not surprising given that rodents and humans diverged in evolution approximately 96 million years ago. The cumulative effect of small and infrequent polymorphisms and other genetic alterations over such a period of time could be expected to radically alter how a cell responds to stressors, in terms of activating or deactivating disease-causing pathways or compensatory mechanisms, and how an NCE interacts with such pathways.
Similarly, cells derived from rodent models show the same limitations, possibly accounting for why findings with NCEs have not translated successfully into the human condition. Recognition that the best model of human is human led to the widespread use of human transformed cell-lines obtained from cancer patients. However, interpretation of studies using such cell-lines has also been fraught with difficulty and prone to misrepresentation. How these cell-lines have changed over the years and the multiple passages to which they have been subjected has been well documented. For example, the HeLa cell-line no longer contains the personal genome of Henrietta Lacks – indeed it contains 70–90 chromosomes rather than the usual complement of 46, and can be argued to no longer represent a human genome [261] – so it is unsurprising that many of the findings using these cells have limited value. Similarly, the cell lines present in the widely used NCI 60 screening panel used to test potential NCEs were found to have altered gene expression profiles such that they better resembled one another, irrespective of the tissue of origin, than clinical tumor samples, with all the cultured cell lines showing an up-regulation of genes that facilitated survival [238,262]. The discovery that human fibroblasts could be reprogrammed using transcription factors into self-renewing induced pluripotent stem cells (iPSCs) that have many of the properties of embryonic stem cells has the potential to alter the way in which human diseases can be studied [263]. iPSCs can be differentiated into nearly all cell lineage types, including those unavailable as cultured cell-lines, e.g., adipocytes and motor neurons. When derived from diseased patients they contain the genetic make-up underlying the disease phenotype, thus representing the most genetically accurate model of the disease [264–266].
Remarkably, patient-specific iPSCs not only recapitulate the phenotype of monogenic disorders, but also that of late-onset polygenic diseases such as Parkinson's disease and HD. The iPSC technology, while at an early stage with new
developments appearing regularly, has considerable translational potential. Phenotypic variability exists between different cell-lines from patients with disease, as well as between those of the unmatched healthy individuals to which they are often compared. This is due to differences in genetic background and epigenetic state, among other factors, but serves to complicate interpretation of the relationship between genotype and phenotype. This confusion is potentially amplified by the current tendency (for reasons of cost and expediency) to compare a limited number of patient-derived cell-lines (sometimes just one) with an equally small number of those created from healthy subjects. Gene-editing technologies such as zinc-finger nucleases, TALENs (transcription activator-like effector nucleases) and CRISPRs (clustered regularly interspaced short palindromic repeats) [267,268] can be applied to create isogenic cell-lines with and without the disease mutation on a common genetic background, to determine the specific contribution of the mutation to the ensuing phenotype. These techniques permit interrogation of a large number of variants, and modifier genes, on a "disease" or "healthy" genetic background. In drug discovery, cardiomyocytes (CMs) derived from iPSCs from patients with cardiac arrhythmic disturbances are being used successfully to model the pathophysiology of long QT syndrome and catecholaminergic polymorphic ventricular tachycardia [269,270]. A recent FDA-led drug safety initiative seeks to revamp the process by which NCEs are evaluated for cardiotoxic potential, in large part by utilizing iPSC-CM models in conjunction with computational modeling to provide a greater level of detail at lower cost [271]. iPSCs transformed to neurons have also been used to study sporadic, late-onset neurodegenerative diseases including HD [272] and PD [273,274].
Screening in an iPSC model of familial dysautonomia (FD) [275] led to the identification of the α2 adrenoceptor antagonist SKF-86466, which could reverse the disease-specific loss of autonomic neuronal marker expression via its ability to induce IKBKAP transcription through modulation of intracellular cAMP levels and PKA-dependent CREB phosphorylation. While iPSCs represent a novel and promising technology utilizing human somatic cell-derived disease models that may have the potential to inform and facilitate the translational process, they remain a reductionistic system, focusing on a single cell type when complex diseases probably involve multiple cell types with differing interactions and a level of complexity that cannot be reproduced readily in vitro. The value of the iPSC technology, like that of animal data, still requires interpretation in a systematic context with other in vitro and in vivo data.

7. Revisiting translation

The disappointments of the T1 translation process in drug discovery have led to concerns as to whether the ambitious goals in the area and the lack of substantial progress to date reflect the challenges of the science or a lack of sufficient effort. Like activities in qualifying biomarkers [17,189,276] and targets (see Section 5), those in the translational sciences are often more consistent with "wishing and hoping" than with fact. While the term translational research is widely used to qualify all manner of research, it has often been viewed as a buzz or catch word [277,278], with instances of successful translation being mostly the result of post hoc hypothesis testing rather than a priori hypothesis creation [279]. Wehling has noted that "the concept does not exist apart from general claims and attributes, and no robust structures, such as toolboxes, algorithms, reproducible standards and procedures, and assessment tools have been developed and/or implemented. Translational medicine might be a clue to the survival
of biomedical research, but it needs to be filled with scientific and operational substance" [278]. Key facets of an effective translational approach, in addition to a systematic approach [279], include: (i) biomarker development [276]; (ii) a structured translatability assessment that provides transparency in risk assessment via predetermined decision trees and algorithms [279,280]; (iii) "non-adaptive" clinical trials [238] that include Phase 0 studies [281]; (iv) structured planning; and (v) efficient networking/interfacing procedures for the multiple disciplines involved [279].

7.1. Alzheimer's biomarkers

The key role of biomarkers is an issue that adds to the intricacy of the translational process, as the logic behind the predictability of some assays is not readily apparent. One example is in AD and involves measuring CSF amyloid peptide and amyloid deposits in brain (Aβ40 and Aβ42) [282] and an "AD signature", a combination of Aβ42 and p-tau181 [283]. In the former, using PET scanning in a cohort of 43 adults aged 65–88 without clinical AD, 9 were identified with brain β-amyloid deposits who in terms of cognitive function were equivalent to the 29 individuals without amyloid deposits, with the remaining 5 subjects having "intermediate" evidence of amyloid deposition [282]. The authors concluded that the fact that individuals with a "significant amyloid burden" were cognitively normal was due either to a high level of "cognitive reserve" or, more importantly, to amyloid deposits being insufficient to cause AD. The use of PET scanning as a diagnostic for AD is questionable, as the US Centers for Medicare & Medicaid Services (CMS) concluded [284] that "evidence is insufficient to conclude that the use of . . . (PET) amyloid-beta (Aβ) imaging is reasonable and necessary for the diagnosis or treatment of illness or injury or to improve the functioning of a malformed body member for Medicare beneficiaries with dementia or neurodegenerative disease".
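The diagnostic-precision point can be made concrete with Bayes' rule. In the PET cohort above, roughly 9 of the 38 classifiable subjects without clinical AD were amyloid-positive; treating that as an approximate false-positive rate, and assuming purely for illustration a 10% AD prevalence in this age band and 90% scan sensitivity (neither figure is from the cited study), the positive predictive value comes out low:

```python
# Positive predictive value (PPV) of a binary marker via Bayes' rule.
# The false-positive rate ~9/38 is taken from the PET cohort of
# cognitively normal adults described above; prevalence and
# sensitivity are illustrative assumptions only.
def ppv(prevalence, sensitivity, false_positive_rate):
    true_pos = sensitivity * prevalence
    false_pos = false_positive_rate * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

print(round(ppv(prevalence=0.10, sensitivity=0.90, false_positive_rate=9/38), 2))
```

Under these assumptions fewer than a third of marker-positive individuals would actually have the disease, which is one way of quantifying why amyloid positivity alone "lacked the necessary precision to accurately diagnose patients".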
Similarly, the CSF biomarker signature was found in 90%, 72%, and 36% of patients with AD, mild cognitive impairment (MCI), and normal cognition, respectively [283]. The presence of the AD signature in cognitively normal subjects was optimistically interpreted as an indication that AD pathology was present earlier in disease progression. In both instances, data that potentially questioned the validity of the respective CSF assays were retrospectively recast as a positive attribute, rather than raising the issue of whether the results were false positives. Certainly, from a prospective viewpoint, both assays lacked the precision necessary to accurately diagnose patients.

7.2. Translatability scoring

Translatability scoring involves a series of criteria to support the advancement of NCEs in the drug discovery process [280]. Its value was reflected in a retrospective analysis of eight clinical candidates based on these criteria [285], where a score of greater than 4 was "indicative of fair to good translatability and low risk" [280]. Of the eight candidates, four – dabigatran (thrombin inhibitor), ipilimumab (monoclonal antibody for metastatic malignant melanoma), gefitinib (RTK inhibitor for non-small cell lung cancer) and varenicline (nicotinic cholinergic receptor partial agonist for smoking cessation) – had translatability scores close to 4, while torcetrapib (CETP (cholesteryl ester transfer protein) inhibitor for hypercholesterolemia) and vilazodone (antidepressant) had scores around 2. The final two therapeutics, latrepirdine (repurposed antihistamine) and semagacestat (γ-secretase inhibitor), both for the treatment of AD, had scores close to 1, reflecting the translational risk in an area in which at least 8 late-stage clinical candidates have failed in the past decade [110].
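The mechanics of such a criteria-based score can be sketched as a simple weighted average. The criteria, weights and ratings below are entirely hypothetical placeholders — the published instrument [280] defines its own criteria, anchors and weighting:

```python
# Hypothetical translatability criteria, each rated 1 (poor) to 5 (good),
# with invented weights; the actual tool of [280] defines its own set.
WEIGHTS = {
    "target_validation": 0.25,
    "biomarker_availability": 0.25,
    "animal_model_relevance": 0.20,
    "clinical_endpoint_quality": 0.20,
    "safety_margin": 0.10,
}

def translatability_score(ratings):
    """Weighted mean of per-criterion ratings on a 1-5 scale."""
    assert set(ratings) == set(WEIGHTS), "rate every criterion"
    return sum(WEIGHTS[c] * r for c, r in ratings.items())

# A candidate rated well on most criteria lands above the
# 'fair to good translatability' threshold of 4 cited in the text.
good = {"target_validation": 5, "biomarker_availability": 4,
        "animal_model_relevance": 4, "clinical_endpoint_quality": 4,
        "safety_margin": 5}
print(translatability_score(good))  # → 4.35
```

The point of such a score is less its absolute value than forcing an explicit, documented judgment on each translational risk dimension before a candidate advances.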


7.3. Clopidogrel – bidirectional translation

Before the era of overt reductionism in biomedical research, NCEs were often advanced to clinical trials and approved based on their ability to reverse or alter an established disease phenotype, typically in the absence of an identified mechanism of action. Thus, in the CNS area, antidepressants, anxiolytics and antipsychotics were approved for human use long before their putative mechanisms of action were established [207,209], often by serendipity [256]. Similarly, in the area of antithrombotics, clopidogrel was identified as an irreversible inhibitor of ADP-induced platelet aggregation [286] that was as efficacious as, and safer than, aspirin, although its mechanism of action (MoA) was unknown [287]. In the absence of a MoA, the clinical dose and its timing were developed on a trial-and-error basis, with some patients showing both variability in response and resistance to the drug. At the same time, clopidogrel was found to be: (a) a prodrug, formation of its active metabolite being dependent on CYP2C19 activity; and (b) an antagonist of the P2Y12 receptor [288]. With this information, variability in patient response to clopidogrel was found to involve both loss-of-function alleles in CYP2C19 and polymorphisms in the P2Y12 receptor [287]. This information led to the development of predictive tests (e.g., VASP – vasodilator-stimulated phosphoprotein phosphorylation) to guide initial loading doses for clopidogrel, and aided the development of second-generation P2Y12 receptor antagonists, e.g., prasugrel, ticagrelor, cangrelor and elinogrel.

7.4. NXY-059

In a meta-analysis of animal data [249], the clinical failure of the free radical scavenger NXY-059 in large trials (5028 patients) in acute ischemic stroke [289,290] was assessed in the context of positive preclinical data that included reduced infarct volume and motor impairment in experimental stroke models (transient, permanent and thrombotic) in rodents, rabbits and primates. Analysis of the data from 585 animals (332 NXY-059-treated, 253 control) – mice, rats and marmosets from 12 laboratories, reflecting 26 experiments, four of which were unpublished – showed that NXY-059 was neuroprotective in preclinical models that met the established STAIR (Stroke Therapy Academic Industry Roundtable) criteria. There was evidence, however, of performance, attrition and publication bias in the preclinical studies reviewed [249]. Interestingly, while spontaneously hypertensive rat (SHR) models of stroke were included in the meta-analysis, NXY-059 was only found to be effective in normotensive rats. Additionally, sample size calculations were absent from all the studies. While the discrepancy between the preclinical and clinical data could have resulted from: (i) a lack of relevance of the preclinical data to the human situation; (ii) efficacious doses in rats and marmosets not being predictive of the human situation; and (iii) issues with brain access of the free radical scavenger; the meta-analysis concluded that, because of bias, the preclinical efficacy of NXY-059 may have been overestimated. Based on these conclusions, which were generally consistent with a similar analysis of data on NXY-059 [248,250], the authors recommended that a meta-analysis of all available preclinical data on an NCE be conducted before the initiation of clinical trials.
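The core of such pooling can be sketched as a minimal inverse-variance, fixed-effect meta-analysis. The effect sizes and variances below are invented for three imaginary laboratories — this is not the NXY-059 dataset of [249], which additionally required formal tests for the publication and attrition bias discussed above:

```python
import math

def fixed_effect_meta(effects, variances):
    """Inverse-variance weighted pooled effect size and its standard error."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se

# Hypothetical standardized mean differences (e.g., infarct-volume
# reduction) and their variances from three imaginary labs.
effects = [0.8, 0.5, 0.2]
variances = [0.04, 0.09, 0.16]

pooled, se = fixed_effect_meta(effects, variances)
low, high = pooled - 1.96 * se, pooled + 1.96 * se
print(f"pooled effect {pooled:.2f} (95% CI {low:.2f} to {high:.2f})")
# → pooled effect 0.63 (95% CI 0.33 to 0.93)
```

A real preclinical meta-analysis would add random-effects modeling for between-laboratory heterogeneity and funnel-plot-based checks for the publication bias that [249] identified.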
7.5. Statin translatability

Assessment of translational efficacy – the correlation between the efficacy of the second- and third-generation statins atorvastatin, simvastatin, lovastatin, pravastatin and rosuvastatin, which produce their cholesterol-lowering effects via inhibition of HMG-CoA reductase, in the ApoE*3 Leiden (E3L) transgenic mouse (a model of familial hyperlipidemia and cholesterol-induced atherosclerosis) and their efficacy in lowering plasma cholesterol in humans – showed no significant relationship (R2 = 0.11, p = 0.57) [291]. However, when the mouse data were adjusted for hepatic drug uptake, the correlation was significant (R2 = 0.89, p < 0.05), emphasizing the need to consider more than potency, efficacy and compound half-life to predict human efficacy.
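The kind of adjustment reported in [291] can be illustrated with a short correlation sketch. All numbers below are invented purely to show the computation — the raw and adjusted R2 values will not reproduce those of the study:

```python
def r_squared(xs, ys):
    """Squared Pearson correlation between paired observations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov ** 2 / (vx * vy)

# Hypothetical % cholesterol lowering for five statins
# (illustrative values only, not the data of [291]).
mouse = [12, 30, 18, 25, 40]                 # raw E3L mouse response
human = [38, 34, 30, 41, 55]                 # clinical response
hepatic_uptake = [3.0, 1.1, 1.8, 1.6, 1.4]   # assumed uptake factor

# Adjusting the mouse response for hepatic drug uptake, as in [291]
adjusted = [m * u for m, u in zip(mouse, hepatic_uptake)]

print(f"raw R^2 = {r_squared(mouse, human):.2f}")        # → raw R^2 = 0.50
print(f"adjusted R^2 = {r_squared(adjusted, human):.2f}")  # → adjusted R^2 = 0.97
```

The design point is that a single preclinical covariate (here, hepatic uptake) can convert an apparently non-predictive model into a predictive one, which is why [291] argues for looking beyond potency, efficacy and half-life.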

8. Future directions

The translational sciences represent a central and scientifically challenging activity that has the potential – and the mandate – to focus and enrich the output from the biomedical sciences while at the same time improving drug discovery metrics [3,4,24,46,91,278–280,285]. Much has been written on translation, with several new journals devoted to the topic. However, progress has been hampered by semantic issues and an unfortunate complacency in the implementation of the science, the latter including a decline in the quality of its execution and of its objective and critical analysis. Superimposed on these factors are issues related to the health of biomedical research in academia (funding), industry (consolidation) or both (scientific culture/data replicability), as well as political and societal concerns regarding the increasing cost of health care due to chronic disease states (autoimmune and neurodegenerative diseases, diabetes, nonmorbid obesity, depression/stress), population growth and increased life expectancy [292].

Key to understanding disease causality and developing safe and effective therapeutics is the ability to accurately diagnose both the disease and its response to treatment. The former requires validated biomarkers [276] and the latter, qualified targets [193]. In both instances, progress has been confounded by semantics related to defining the end product (not the least of which is the misuse of the qualifier, innovative), by "big science", database-oriented initiatives [17] that presume success based on their own existence rather than on data, by a lack of objective and systematic collection and assessment of the required evidence, and by commercial interests that often distort both the science and its reporting.

Added to these challenges are a multiplicity of issues related to the culture of 21st century biomedical science that involve experimental design, bias and data reliability, the latter a topic that, while of recent interest [34–39], has been a long-standing issue in the effective transitioning of basic research to the applied arena [293]. The finding [249] that bias was a contributing factor in overestimating the preclinical efficacy of the free radical scavenger NXY-059 in preclinical models of stroke argues for a more detailed analysis of all available data before initiating trials. Although a logical and perhaps mandatory approach to the translational process, its practical implementation may be challenging, especially given the reluctance of some investigators to provide access to data [249] and the cultural aspects of N-I-H (not-invented-here) syndrome [294], both of which waste resources and impair decision making and innovation [295].

Casadevall and Fang [296–298] have recently taken as a theme the need to reform some aspects of contemporary science. They have identified issues that include the current metrics used for awarding grants for basic research, the limited funding available, the cult of the self-promoting scientific entrepreneur, the adequacy of training of newly emerging scientists (especially in the statistical sciences) and concerns regarding the continued competitiveness and leadership of the US in global biomedical science. Without appropriate training, mentoring, competence and ethical transparency in basic research, the ability to conduct successful translational research becomes limited. The latter requires a systematic framework, free of overt bias, and data that are unimpeachable in quality and content, irrespective of their contributions and final meaning. This framework is more than adequately represented by the integrative discipline of pharmacology [8], efforts and training in which, while recognized as limited, need to be revitalized to ensure that research is more effectively and efficiently translated into therapeutics and improved healthcare.

References

[1] Moses III H, Martin JB. Biomedical research and health advances. N Engl J Med 2011;364:567–71. [2] Horrobin DF. Modern biomedical research: an internally self-consistent universe with little contact with medical reality? Nat Rev Drug Discov 2003;2:151–4. [3] Sung NS, Crowley Jr WF, Genel M, Salber P, Sandy L, Sherwood LM, et al. Central challenges facing the national clinical research enterprise. J Am Med Assoc 2003;289:1278–87. [4] Woolf SH. The meaning of translational research and why it matters. J Am Med Assoc 2008;299:211–3. [5] Helfand M, Tunis S, Whitlock EP, Pauker SG, Basu A, Chilingerian J, et al. A CTSA agenda to advance methods for comparative effectiveness research. Clin Transl Sci 2011;4:188–98. [6] Morgan P, Van Der Graaf PH, Arrowsmith J, Feltner DE, Drummond KS, Wegner CD, et al. Can the flow of medicines be improved? Fundamental pharmacokinetic and pharmacological principles toward improving Phase II survival. Drug Discov Today 2012;17:419–24. [7] ASPET Division for Integrative Systems. Translational and clinical pharmacology mission statement. http://www.aspet.org/ISTCP/Home/ [accessed 11.09.13]. [8] Winquist RJ, Mullane KM, Williams M. The fall and rise of pharmacology – (re-) defining the discipline? Biochem Pharmacol 2014 [in this issue]. [9] Dorsey ER, Thompson JP, Carrasco M, de Roulet J, Vitticore P, Nicholson S, et al. Financing of U.S. Biomedical Research and New drug approvals across therapeutic areas. PLoS ONE 2009;4:e7015. [10] Leeson PD, Springthorpe B. The influence of drug-like concepts on decisionmaking in medicinal chemistry. Nat Rev Drug Discov 2007;6:881–90. [11] Sun X, Vilar S, Tatonetti NP. High-throughput methods for combinatorial drug discovery. Sci Transl Med 2013;5:205rv1. [12] Bennani YL. Drug discovery in the next decade: innovation needed ASAP. Drug Discov Today 2012;16/17:779–92. [13] Imming P, Sinning C, Meyer A. Drugs, their targets and the nature and number of drug targets. Nat Rev Drug Discov 2006;5:821–34. 
[14] Swinney DC, Anthony J. How were new medicines discovered? Nat Rev Drug Discov 2011;10:507–19. [15] Swinney DC. Phenotypic vs. target-based drug discovery for first-in-class medicines. Clin Pharmacol Ther 2013;93:299–301. [16] Schenone M, Dančík V, Wagner BK, Clemons PA. Target identification and mechanism of action in chemical biology and drug discovery. Nat Chem Biol 2013;9:232–40. [17] Anderson DC, Kodulka K. Biomarkers in pharmacology. Biochem Pharmacol 2014 [in this issue]. [18] Simon GM, Niphakis MJ, Cravatt BF. Determining target engagement in living systems. Nat Chem Biol 2013;9:200–5. [19] Visser SAG, Aurell M, Jones RDO, Schuck VJA, Egnell A-C, Sheila A, et al. Model-based drug discovery: implementation and impact. Drug Discov Today 2013;18:764–75. [20] McGonigle P, Ruggeri B. Animal model utility in drug discovery. Biochem Pharmacol 2014 [in this issue]. [21] Fan J, de Lannoy IAM. Pharmacokinetics in pharmacology. Biochem Pharmacol 2014 [in this issue]. [22] Steinmetz KL, Spack EG. The basics of preclinical drug development for neurodegenerative disease indications. BMC Neurol 2009;9(Suppl. 1):S2. [23] Ewart L, Gallacher DJ, Gintant G, Guillon J-M, Leishman D, Levesque P, et al. How do the top 12 pharmaceutical companies operate safety pharmacology? J Pharmacol Toxicol Methods 2012;66:66–70. [24] Liebman MN, Marincola FM. Expanding the perspective of translational medicine: the value of observational data. J Transl Med 2012;10:61. [25] Hughes JP, Rees S, Kalindjian SB, Philpott KL. Principles of early drug discovery. Br J Pharmacol 2011;162:1239–49. [26] Cooke P. Regional innovation systems: general findings and some new evidence from biotechnology clusters. J Technol Transfer 2002;27:133–45. [27] Benneworth P, Hospers G-J. The new economic geography of old industrial regions: universities as global–local pipelines. Environ Plann C Govern Policy 2007;25:779–802. [28] Feldman M, Romanelli E.
Organizational legacy and the internal dynamics of clusters: the U.S. Human Biotherapeutics Industry, 1976–2002. In: Meusburger P, Glückler J, el Meskioui M, editors. Knowledge and space, vol. 5. Dordrecht, Germany: Springer Science + Business Media; 2013. p. 207–30. [29] Elkind P, Reingold J, Burke D. Inside Pfizer's palace coup. Fortune; 2011. http://features.blogs.fortune.cnn.com/2011/07/28/pfizer-jeff-kindler-shakeup/.

Please cite this article in press as: Mullane K, et al. Translational paradigms in pharmacology and drug discovery. Biochem Pharmacol (2013), http://dx.doi.org/10.1016/j.bcp.2013.10.019


[30] Hörig H, Pullman W. From bench to clinic and back: perspective on the 1st IQPC Translational Research conference. J Transl Med 2004;2:44. [31] Menand L. The marketplace of ideas. Forbes; 2009. http://Forbes.com/2009/08/02/university-education-reform-opinions-colleges-09-louis-menand.html. [32] Crichton M. Quoted in Loscalzo J. Experimental irreproducibility: causes, (mis)interpretations, and consequences. Circulation 2012;125:1211–4. [33] Kenakin T, Bylund DB, Toews ML, Mullane M, Winquist RJ, Williams M. Replicated, replicable and relevant – target engagement and pharmacological experimentation in the 21st century. Biochem Pharmacol 2014 [in this issue]. [34] Ioannidis JPA. Why most published research findings are false. PLoS Med 2005;2:e124. [35] Prinz F, Schlange T, Asadullah K. Believe it or not: how much can we rely on published data on potential drug targets? Nat Rev Drug Discov 2011;10:712–3. [36] Begley CG, Ellis LM. Drug development: raise standards for preclinical cancer research. Nature 2012;483:531–3. [37] Loscalzo J. Experimental irreproducibility: causes, (mis)interpretations, and consequences. Circulation 2012;125:1211–4. [38] Fanelli D. How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLoS ONE 2009;4:e5738. [39] Freedman DH. Lies, damned lies, and medical science. The Atlantic; 2010. http://www.theatlantic.com/magazine/archive/2010/11/lies-damned-lies-and-medical-science/308269/. [40] Kakuk P. The legacy of the Hwang case: research misconduct in biosciences. Sci Engineer Ethics 2009;1:645–62. [41] Fang FC, Steen GR, Casadevall A. Misconduct accounts for the majority of retracted scientific publications. Proc Natl Acad Sci USA 2012;109:17028–33. [42] Bhattacharjee Y. The mind of a con man. New York Times Magazine; 2013.
http://www.nytimes.com/2013/04/28/magazine/diederik-stapels-audacious-academic-fraud.html?pagewanted=all&_r=1&. [43] Steen RG. Retractions in the scientific literature: is the incidence of research fraud increasing? J Med Ethics 2011;37:249–53. [44] Steen RG, Casadevall A, Fang FC. Why has the number of scientific retractions increased? PLoS ONE 2013;8:e68397. [45] Ioannidis JPA. Why science is not necessarily self-correcting. Perspect Psychol Sci 2012;7:645–54. [46] Rosenblatt M. How academia and the pharmaceutical industry can work together. Ann Am Thorac Soc 2013;10:31–8. [47] Borrel B. A medical Madoff: anesthesiologist faked data in 21 studies. Sci Am; 2009. http://www.scientificamerican.com/article.cfm?id=a-medical-madoff-anesthestesiologist-faked-data. [48] Deer B. How the case against the MMR vaccine was fixed. Br Med J 2011;342:c5347. [49] Godlee F, Smith J, Marcovitch H. Wakefield’s article linking MMR vaccine and autism was fraudulent. Br Med J 2011;342:c7452. [50] Lazebnik Y. Can a biologist fix a radio? Or, what I learned while studying apoptosis. Cancer Cell 2002;2:179–82. [51] Brenner S. An interview with. . . Sydney Brenner, Interview by Errol C. Friedberg. Nat Rev Mol Cell Biol 2008;9:8–9. [52] Kubinyi H. Drug research: myths, hype and reality. Nat Rev Drug Discov 2003;2:665–8. [53] Shaywitz D, Taleb N. Drug research needs serendipity. Financial Times; 2008 [http://www.ft.com/intl/cms/s/0/b735787c-5d9b-11dd-8129000077b07658.html. [54] Muchnik L, Aral S, Taylor SJ. Social influence bias: a randomized experiment. Science 2013;341:647–51. [55] FDA. Innovation or stagnation. In: Challenge and opportunity on the critical path to new medical products. Bethesda, MD: FDA; 2004 [http://www.fda. gov/oc/initiatives/criticalpath/whitepaper.html [accessed October 2013]. [56] Munos B. Lessons from 60 years of pharmaceutical innovation. Nat Rev Drug Discov 2009;8:959–68. [57] Pammolli F, Magazzini L, Riccaboni M. The productivity crisis in pharmaceutical R & D. 
Nat Rev Drug Discov 2011;10:428–38. [58] Light DW, Warburton R. Demythologizing the high costs of pharmaceutical research. BioSocieties 2011;6:34–50. [59] Mestre-Ferrandiz J, Sussex J, Towse A. The R & D cost of a new medicine. London, UK: Office of Health Economics; 2012. [60] Herper M. The cost of creating a new drug now $5 billion, pushing big pharma to change. Forbes; 2013 [http://www.forbes.com/sites/matthewherper/ 2013/08/11/how-the-staggering-cost-of-inventing-new-drugs-is-shapingthe-future-of-medicine/. [61] Scannell JW, Blanckley A, Boldon H, Warrington B. Diagnosing the decline in pharmaceutical R&D efficiency. Nat Rev Drug Discov 2012;11:191–200. [62] Williams M. Productivity shortfalls in drug discovery: contributions from the preclinical sciences? J Pharmacol Exp Ther 2011;336:3–8. [63] Sackett DL. Bias in analytic research. J Chronic Dis 1979;32:51–63. [64] Lehrer J. The science of irrationality. Wall street J; 2011 [http://online.wsj.com/article/SB10001424052970203633104576625071820638808.html. [65] Mandavilli A. Peer review: trial by twitter. Nature 2011;469:286–7. [66] Rossner M, Yamada KM. What’s in a picture? The temptation of image manipulation’’. J Cell Biol 2004;166:11–5. [67] Rossner M. How to guard against image fraud. The scientist; 2006 [http:// www.the-scientist.com/?articles.view/articleNo/23749/title/How-to-GuardAgainst-Image-Fraud/.


[68] Bylund DB, Toews M. Quantitative versus qualitative data – the numerical dimension in biomedical research. Biochem Pharmacol 2014 [in this issue]. [69] Kitazawa M. Circadian rhythms, metabolism, and insulin sensitivity: transcriptional networks in animal models. Curr Diabetes Reports 2013;13:223–8. [70] Antle MC, Silver R. Orchestrating time: arrangements of the brain circadian clock. Trends Neurosci 2005;28:145–51. [71] Hastings M, O'Neill JS, Maywood ES. Circadian clocks: regulators of endocrine and metabolic rhythms. J Endocrinol 2007;195:187–98. [72] Revel FG, Gottowik J, Gatti S, Wettstein JG, Moreau J-L. Rodent models of insomnia: a review of experimental procedures that induce sleep disturbance. Neurosci Biobehavior Rev 2009;33:874–99. [73] Levi F, Schibler U. Circadian rhythms: mechanisms and therapeutic implications. Ann Rev Pharmacol Toxicol 2007;47:593–628. [74] Sack RL, Auckley D, Auger RR, Carskadon MA, Wright Jr KP, Vitiello MV, et al. Circadian rhythm sleep disorders: Part I, basic principles, shift work and jet lag disorders: an American Academy of Sleep Medicine review. Sleep 2007;30:1460–83. [75] Malhi GS, Kuiper S. Chronobiology of mood disorders. Acta Psychiatr Scand 2013;128(s444):2–15. [76] Lévi F, Altinok A, Clairambault J, Goldbeter A. Implications of circadian clocks for the rhythmic delivery of cancer therapeutics. Philos Trans A Math Phys Eng Sci 2008;366:3575–98. [77] Farrow SN, Solari R, Willson TM. The importance of chronobiology to drug discovery. Exp Opin Drug Discov 2012;7:535–41. [78] Toth LA, Bhargava P. Animal models of sleep disorders. Comp Med 2013;63:91–104. [79] Huang W, Ramsey KM, Marcheva B, Bass J. Circadian rhythms, sleep, and metabolism. J Clin Invest 2011;121:2133–41. [80] Kenakin T, Williams M. Defining and characterizing drug/compound function. Biochem Pharmacol 2014 [in this issue]. [81] McDonald JH. Handbook of biological statistics. 2nd ed. Baltimore, MD: Sparky House Publishing; 2009. p. 15–20. http://www.lulu.com/product/18578349. [82] Bebarta V, Luyten D, Heard K. Emergency medicine animal research: does use of randomization and blinding affect the results? Acad Emerg Med 2003;10:684–7. [83] Sena ES, van der Worp HB, Bath PMW, Howells DW, Macleod MR. Publication bias in reports of animal stroke studies leads to major overstatement of efficacy. PLoS Biol 2010;8:e1000344. [84] Marino M. The use and misuse of statistical methodologies in pharmacology research. Biochem Pharmacol 2014 [in this issue]. [85] Button KS, Ioannidis JP, Mokrysz C, Nosek BA, Flint J, Robinson ES, et al. Power failure: why small sample size undermines the reliability of neuroscience. Nat Rev Neurosci 2013;14:365–76. [86] Henderson VC, Kimmelman J, Fergusson D, Grimshaw JM, Hackam DG. Threats to validity in the design and conduct of preclinical efficacy studies: a systematic review of guidelines for in vivo animal experiments. PLoS Med 2013;10:e1001489. [87] Kilkenny C, Parsons N, Kadyszewski E, Festing MFW, Cuthill IC, et al. Survey of the quality of experimental design, statistical analysis and reporting of research using animals. PLoS ONE 2009;4:e7824. [88] Wadman M. NIH mulls rules for validating key results. Nature 2013;500:14–6. [89] Collins FS, Green ED, Guttmacher AE, Guyer MS. US National Human Genome Research Institute. A vision for the future of genomics research. Nature 2003;422:835–47. [90] Collins FS. Has the revolution arrived? Nature 2010;464:674–5. [91] Mullane K, Williams M. Translational semantics and infrastructure: another search for the Emperor's clothes. Drug Discov Today 2012;17:459–68. [92] Lowe D. Sloppy science. In the Pipeline; 2012. http://pipeline.corante.com/archives/2012/03/29/sloppy_science.php. [93] Buntin MB, Burke MF, Hoaglin MC, Blumenthal D. The benefits of health information technology: a review of the recent literature shows predominantly positive results. Health Affair 2011;30:3464–71. [94] Tsilidis KK, Panagiotou OA, Sena ES, Aretouli E, Evangelou E, Howells DW, et al.
Evaluation of excess significance bias in animal studies of neurological diseases. PLoS Biol 2013;11:e1001609. [95] Ioannidis JPA, Trikalinos TA. Early extreme contradictory estimates may appear in published research: the Proteus phenomenon in molecular genetics research and randomized trials. J Clin Epidemiol 2005;58:543–9. [96] Pfeiffer T, Bertram L, Ioannidis JPA. Quantifying selective reporting and the Proteus phenomenon for multiple datasets with similar bias. PLoS ONE 2011;6:e18362. [97] Ioannidis JP. Excess significance bias in the literature on brain volume abnormalities. Arch Gen Psychiatry 2011;68:773–80. [98] Mobley A, Linder SK, Braeuer R, Ellis LM, Zwelling L. A survey on data reproducibility in cancer research provides insights into our limited ability to translate findings from the laboratory to the clinic. PLoS ONE 2013;8:e63221. [99] Editorial. Science publishing: how to stop plagiarism. Nature 2012;481:21–3. [100] Martin BR. Whither research integrity? Plagiarism, self-plagiarism and coercive citation in an age of research assessment. Res Policy 2013;42:1005–14. [101] Stemwedel JD. The continuum between outright fraud and "sloppy science": inside the frauds of Diederik Stapel (part 5). Sci Am; 2013. http://blogs.scientificamerican.com/doing-good-science/2013/06/26/the-continuum-between-outright-fraud-and-sloppy-science-inside-the-frauds-of-diederik-stapel-part-5. [102] Baggerly KA, Coombes KR. Deriving chemosensitivity from cell lines: forensic bioinformatics and reproducible research in high-throughput biology. Ann Appl Stat 2009;3:1309–34. [103] Couzin-Frankel J. As questions grow, Duke halts trials, launches investigation. Science 2010;329:614–5. [104] Retraction Watch. The importance of being reproducible: Keith Baggerly tells the Anil Potti story; 2011. http://retractionwatch.wordpress.com/2011/05/04/the-importance-of-being-reproducible-keith-baggerly-tells-the-anil-potti-story/. [105] Lowe D. New frontiers in analytical chemistry. In the Pipeline; 2013. http://pipeline.corante.com/archives/2013/08/07/new_frontiers_in_analytical_chemistry.php. [106] Smith III A. Data integrity. Org Lett 2013;15:2893–4. [107] Goldacre B. Bad medicine. London: Fourth Estate; 2012. p. 1–99. [108] Eyding D, Lelgemann M, Grouven U, Harter M, Kromp M, Kaiser T, et al. Reboxetine for acute treatment of major depression: systematic review and meta-analysis of published and unpublished placebo and selective serotonin reuptake inhibitor controlled trials. Br Med J 2010;341:4737. [109] Doshi P, Dickersin K, Healy D, Vedula SW, Jefferson T. Restoring invisible and abandoned trials: a call for people to publish the findings. Br Med J 2013;346:f2865. [110] Mullane K, Williams M. Alzheimer's therapeutics: continued clinical failures question the validity of the amyloid hypothesis – but what lies beyond? Biochem Pharmacol 2013;85:289–305. [111] Naik G. Mistakes in scientific studies surge. Wall St J; 2011. http://online.wsj.com/article/SB10001424052702303627104576411850666582080.html. [112] Begley S. In cancer science, many "discoveries" don't hold up. Reuters; 2012. http://www.reuters.com/article/2012/03/28/us-science-cancer-idUSBRE82R12P20120328. [113] Accenture and CMR International. Rethinking innovation in pharmaceutical R&D; 2005. http://www.accenture.com/Microsites/rdtransformation/Documents/PDFs/Accenture_Rethinking_Innovation_in_Pharmaceutical.pdf. [114] Horton R. Quoted in Naik G. Mistakes in scientific studies surge. Wall St J; 2011. http://online.wsj.com/article/SB10001424052702303627104576411850666582080.html. [115] Lee CJ, Sugimoto CR, Zhang G, Cronin B. Bias in peer review. J Am Soc Inform Sci Technol 2013;64:2–17. [116] Editorial. Reducing our irreproducibility. Nature 2013;496:398. [117] Boakes EH, McGowan PJK, Fuller RA, Chang-qing D, Clark NE, et al. Distorted views of biodiversity: spatial and temporal bias in species occurrence data. PLoS Biol 2010;8:e1000385. [118] Greenhalgh T. How to read a paper. 4th ed. Chichester, UK: Wiley Blackwell BMJ Books; 2010. [119] Columbia Accident Investigation Board. Washington, DC: NASA; 2003. http://spaceflight.nasa.gov/shuttle/archives/sts-107/investigation/CAIB_medres_full.pdf. [120] Tufte E. PowerPoint does rocket science: assessing the quality and credibility of technical reports. In: Beautiful evidence. Cheshire, CT: Graphics Press; 2006. p. 162–70. [121] Crippen DL, Daniel CC, Donahue AK, Helms SJ, Livingstone M, O'Leary R, et al. NASA return to flight task group final report: Annex A.2 individual member observations; 2005. http://www.spaceref.com/news/viewsr.html?pid=17773. [122] Bagshaw SM, Zappitelli M, Chawla LS. Novel biomarkers of AKI: the challenges of progress 'Amid the noise and the haste'. Nephrol Dial Transplant 2013;28:235–8. [123] Schmutz J, Wheeler J, Grimwood J, Dickson M, Yang J, Caoile C, et al. Quality assessment of the human genome sequence. Nature 2004;429:365–8. [124] Ioannidis JPA. This I believe in genetics: discovery can be a nuisance, replication is science, implementation matters. Front Genet 2013. http://dx.doi.org/10.3389/fgene.2013.00033. [125] Klein RJ, Zeiss C, Chew EY, Tsai JY, Sackler RS, Haynes C, et al. Complement factor H polymorphism in age-related macular degeneration. Science 2005;308:385–9. [126] McCarthy JJ, McLeod HL, Ginsburg GS. Genomic medicine: a decade of successes, challenges and opportunities. Sci Transl Med 2013;5:189sr4. [127] International HapMap Consortium. The international HapMap project. Nature 2003;426:789–96. [128] Ioannidis JPA. An epidemic of false claims. Competition and conflicts of interest distort too many medical findings. Sci Am 2011;304:16. [129] Di Iulio J, Rotger M. Pharmacogenomics: what is next? Front Pharmacol 2012. http://dx.doi.org/10.3389/fphar.20111.00086. [130] The ENCODE Project Consortium. An integrated encyclopedia of DNA elements in the human genome. Nature 2012;489:57–74. [131] Maurano MT, Humbert R, Rynes E, Thurman RE, Haugen E, Wang H, et al. Systematic localization of common disease-associated variation in regulatory DNA. Science 2012;337:1190–5. [132] Esteller M. Non-coding RNAs in human disease. Nat Rev Genet 2011;12:861–74. [133] Manolio TA. Bringing genome-wide association findings into clinical use. Nat Rev Genet 2013;14:549–58. [134] Brunham LR, Hayden MR. Hunting human disease genes: lessons from the past, challenges for the future. Hum Genet 2013;132:603–17.

[135] Marjoram P, Zubair A, Nuzhdin SV. Post-GWAS: where next? More samples, more SNPs or more biology? Heredity 2013. http://dx.doi.org/10.1038/hdy.2013.52.
[136] Lander ES. Initial impact of the sequencing of the human genome. Nature 2011;470:187–97.
[137] Zaitlen N, Kraft P, Patterson N, Pasaniuc B, Bhatia G, Pollack S, et al. Using extended genealogy to estimate components of heritability for 23 quantitative and dichotomous traits. PLoS Genet 2013;9:e1003520.
[138] Zuk O, Hechter E, Sunyaev SR, Lander ES. The mystery of missing heritability: genetic interactions create phantom heritability. Proc Natl Acad Sci USA 2012;109:1193–8.
[139] Hobbs HH, Leitersdorf E, Leffert CC, Cryer DR, Brown MS, Goldstein JL. Evidence for a dominant gene that suppresses hypercholesterolemia in a family with defective low density lipoprotein receptors. J Clin Invest 1989;84:656–64.
[140] Mao J-H, Saunier EF, de Koning JP, McKinnon MM, Higgins MN, Nicklas K, et al. Genetic variants of Tgfb1 act as context-dependent modifiers of mouse skin tumor susceptibility. Proc Natl Acad Sci USA 2006;103:8125–30.
[141] Freimuth J, Clermont FF, Huang X, DeSapio A, Tokuyasu TA, Sheppard D, et al. Epistatic interactions between Tgfb1 and genetic loci, Tgfbm2 and Tgfbm3, determine susceptibility to an asthmatic stimulus. Proc Natl Acad Sci USA 2012;109:18042–7.
[142] Gusella JF, Wexler NS, Conneally PM, Naylor SL, Anderson MA, Tanzi RE, et al. A polymorphic DNA marker genetically linked to Huntington's disease. Nature 1983;306:234–8.
[143] MacDonald ME, Ambrose CM, Duyao MP, Myers RH, Lin C, Srinidhi L, et al. A novel gene containing a trinucleotide repeat that is expanded and unstable on Huntington's disease chromosomes. Cell 1993;72:971–83.
[144] Zuccato C, Valenza M, Cattaneo E. Molecular mechanisms and potential therapeutical targets in Huntington's disease. Physiol Rev 2010;90:905–81.
[145] Moffatt MF, Kabesch M, Liang L, Dixon AL, Strachan D, Heath S, et al. Genetic variants regulating ORMDL3 expression contribute to the risk of childhood asthma. Nature 2007;448:470–3.
[146] Zhang Y, Moffatt MF, Cookson WOC. Genetic and genomic approaches to asthma: new insights for the origins. Curr Opin Pulm Med 2012;18:6–13.
[147] Ioannidis JPA. Genetic prediction for common diseases: will personal genomics ever work? Arch Intern Med 2012;172:744–6.
[148] Do CB, Hinds DA, Francke U, Eriksson N. Comparison of family history and SNPs for predicting risk of complex disease. PLoS Genet 2012;8:e1002973.
[149] Smith JG, Newton-Cheh C, Almgren P, Melander O, Platonov PG. Genetic polymorphisms for estimating risk of atrial fibrillation in the general population: a prospective study. Arch Intern Med 2012;172:742–4.
[150] Tennessen JA, Bigham AW, O'Connor TD, Fu W, Kenny EE, Gravel S, et al. Evolution and functional impact of rare coding variation from deep sequencing of human exomes. Science 2012;337:64–9.
[151] Bettens K, Sleegers K, Van Broeckhoven C. Genetic insights in Alzheimer's disease. Lancet Neurol 2013;12:92–104.
[152] Ciarlo E, Massone S, Penna I, Nizzari M, Gigoni A, Dieci G, et al. An intronic ncRNA-dependent regulation of SORL1 expression affecting Aβ formation is upregulated in post-mortem Alzheimer's disease brain samples. Dis Model Mech 2013;6:424–33.
[153] Goldstein DB, Allen A, Keebler J, Margulies EH, Petrou S, Petrovski S, et al. Sequencing studies in human genetics: design and interpretation. Nat Rev Genet 2013;14:460–70.
[154] Rung J, Brazma A. Reuse of public genome-wide gene expression data. Nat Rev Genet 2013;14:89–99.
[155] Williams M. Commentary: genome-based CNS drug discovery: D-amino acid oxidase (DAAO) as a novel target for antipsychotic medications: progress and challenges. Biochem Pharmacol 2009;78:1360–5.
[156] Firestein GS, Pisetsky DS. DNA microarrays: boundless technology or bound by technology? Guidelines for studies using microarray technology. Arthritis Rheum 2002;46:859–61.
[157] Ioannidis JP, Allison DB, Ball CA, Coulibaly I, Cui X, et al. Repeatability of published microarray gene expression analyses. Nat Genet 2009;41:149–55.
[158] Bell CJ, Dinwiddie DL, Miller NA, Hateley SL, Ganusova EE, Mudge J, et al. Carrier testing for severe childhood recessive diseases by next-generation sequencing. Sci Transl Med 2011;3:65ra4.
[159] Mardis ER. Next-generation sequencing platforms. Ann Rev Anal Chem 2013;6:287–303.
[160] Brown T, quoted in Marx V. The genome jigsaw. Nature 2013;501:263–8.
[161] Lupski JR. Genome mosaicism – one human, multiple genomes. Science 2013;341:358–9.
[162] O'Huallachain M, Karczewski KJ, Weissman SM, Urban AW, Snyder MP. Extensive genetic variation in somatic human tissues. Proc Natl Acad Sci USA 2012;109:18018–23.
[163] Ozsolak F, Milos PM. RNA sequencing: advances, challenges and opportunities. Nat Rev Genet 2011;12:87–98.
[164] Kapelli K, Yeo GW. Genome-wide approaches to dissect the roles of RNA binding proteins in translational control: implications for neurological diseases. Front Neurosci 2012;6:144.
[165] Ugras SE, Shorter J. RNA-binding proteins in amyotrophic lateral sclerosis and neurodegeneration. Neurol Res Int 2012;2012:432780.
[166] Polymenidou M, Lagier-Tourenne C, Hutt KR, Huelga SC, Moran J, Liang TY, et al. Long pre-mRNA depletion and RNA missplicing contribute to neuronal vulnerability from loss of TDP-43. Nat Neurosci 2011;14:459–68.

Please cite this article in press as: Mullane K, et al. Translational paradigms in pharmacology and drug discovery. Biochem Pharmacol (2013), http://dx.doi.org/10.1016/j.bcp.2013.10.019

[167] Richard-Miceli C, Criswell LA. Emerging patterns of genetic overlap across autoimmune disorders. Genome Med 2012;4:6.
[168] Parkes M, Cortes A, van Heel DA, Brown MA. Genetic insights into common pathways and complex relationships among immune-mediated diseases. Nat Rev Genet 2013;14:661–73.
[169] Li X, Ampleford EJ, Howard TD, Moore WC, Torgerson DG, Li H, et al. Genome-wide association studies of asthma indicate opposite immunopathogenesis direction from autoimmune diseases. J Allergy Clin Immunol 2012;130:861–8.
[170] Hueber W, Sands BE, Lewitzky S, Vandemeulebroecke M, Reinisch W, Higgins PDR, et al. Secukinumab, a human anti-IL-17A monoclonal antibody, for moderate to severe Crohn's disease: unexpected results of a randomized, double-blind placebo-controlled trial. Gut 2012;61:1693–700.
[171] Gregory AP, Dendrou CA, Attfield KE, Haghikia A, Xifara DK, Butter F, et al. TNF receptor 1 genetic risk mirrors outcome of anti-TNF therapy in multiple sclerosis. Nature 2012;488:508–11.
[172] Egger G, Liang G, Aparicio A, Jones PA. Epigenetics in human disease and prospects for epigenetic therapy. Nature 2004;429:457–63.
[173] Feil R, Fraga MF. Epigenetics and the environment: emerging patterns and implications. Nat Rev Genet 2012;13:97–109.
[174] Ventham NT, Kennedy NA, Nimmo ER, Satsangi J. Beyond gene discovery in inflammatory bowel disease: the emerging role of epigenetics. Gastroenterology 2013;145:293–306.
[175] Messias EL, Chen C-Y, Eaton WW. Epidemiology of schizophrenia: review of findings and myths. Psychiatr Clin N Am 2007;30:323–38.
[176] Duarte JD. Epigenetics primer: why the clinician should care about epigenetics. Pharmacotherapy 2013. http://dx.doi.org/10.1002/phar.1325.
[177] Weichenhan D, Plass C. The evolving epigenome. Hum Mol Genet 2013. http://dx.doi.org/10.1093/hmg/ddt348.
[178] Dirkx E, da Costa Martins PA, De Windt LJ. Regulation of fetal gene expression in heart failure. Biochim Biophys Acta 2013. http://dx.doi.org/10.1016/j.bbadis.2013.07.023.
[179] Nagano T, Lubling Y, Stevens TJ, Schoenfelder S, Yaffe E, Dean W, et al. Single-cell Hi-C reveals cell-to-cell variability in chromosome structure. Nature 2013;502:59–64.
[180] Estes AC. Chromosomes actually look like an insane ball of spaghetti; 2013. http://gizmodo.com/chromosomes-actually-look-like-an-insane-ball-of-spaghe-1390765749.
[181] Misteli T. Beyond the sequence: cellular organization of genome function. Cell 2007;128:787–800.
[182] Misteli T. Protein dynamics: implications for nuclear architecture and gene expression. Science 2001;291:843–7.
[183] Cremer T, Cremer C. Chromosome territories, nuclear architecture and gene regulation in mammalian cells. Nat Rev Genet 2001;2:292–301.
[184] Reddy KL, Feinberg AP. Higher order chromatin organization in cancer. Semin Cancer Biol 2013;23:109–15.
[185] Jones R, Pembrey M, Golding J, Herrick D. The search for genotype/phenotype associations and the phenome scan. Paediatr Perinat Epidemiol 2005;19:264–75.
[186] Denny JC, Ritchie MD, Basford MA, Pulley JM, Bastarache L, Brown-Gentry K, et al. PheWAS: demonstrating the feasibility of a phenome-wide scan to discover gene-disease associations. Bioinformatics 2010;26:1205–10.
[187] Pendergrass SA, Brown-Gentry K, Dudek S, Torstenson ES, Ambite JL, Avery CL, et al. The use of phenome-wide association studies (PheWAS) for exploration of novel genotype-phenotype relationships and pleiotropy discovery. Genet Epidemiol 2011;35:410–22.
[188] Hebbring SJ, Schrodi SJ, Ye Z, Zhou Z, Page D, Brilliant MH. A PheWAS approach in studying HLA-DRB1*1501. Genes Immun 2013;14:187–91.
[189] Goodsaid FM, Frueh FW, Mattes W. Strategic paths for biomarker qualification. Toxicology 2008;245:219–23.
[190] Gashaw I, Ellinghaus P, Sommer A, Asadullah K. What makes a good drug target? Drug Discov Today 2011;16:1037–43.
[191] Plenge RM, Scolnick EM, Altshuler D. Validating therapeutic targets through human genetics. Nat Rev Drug Discov 2013;12:581–94.
[192] Kopec K, Bozyczko-Coyne DB, Williams M. Target identification and validation in drug discovery: the role of proteomics. Biochem Pharmacol 2005;69:1133–9.
[193] Sorger PK, Schoeberl B. An expanding role for cell biologists in drug discovery and pharmacology. Mol Biol Cell 2012;23:4162–4.
[194] Rydzewski RM. Real world drug discovery: a chemist's guide to biotech and pharmaceutical research. Oxford, UK: Elsevier; 2008. p. 141–3.
[195] LaMattina J. BCG weighs in on first-in-class vs. best-in-class drugs – how valuable is their advice? Forbes; 2013. http://www.forbes.com/sites/johnlamattina/2013/06/17/bcg-weighs-in-on-first-in-class-vs-best-in-class-drugs-how-valuable-is-their-advice/.
[196] Wu TY-H, Ding S. Target validation in chemogenomics. In: Metcalf B, Dillon S, editors. Target validation in drug discovery. Burlington, MA: Academic Press; 2007. p. 27–39.
[197] Grammel M, Hang HC. Chemical reporters for biological discovery. Nat Chem Biol 2013;9:475–84.
[198] Bunnage ME, Chekler ELP, Jones LH. Target validation using chemical probes. Nat Chem Biol 2013;9:195–9.
[199] Hopkins PH. Familial hypercholesterolemia – improving treatment and meeting guidelines. Int J Cardiol 2003;89:13–23.
[200] Abifadel M, Varret M, Rabès J-P, Allard D, Ouguerram K, Devillers M, et al. Mutations in PCSK9 cause autosomal dominant hypercholesterolemia. Nat Genet 2003;34:154–6.


[201] Yang Y, Wang Y, Li S, Xu Z, Li H, Ma L, et al. Mutations in SCN9A, encoding a sodium channel alpha subunit, in patients with primary erythermalgia. J Med Genet 2004;41:171–4.
[202] Black JW. Pharmacology: analysis and exploration. Brit Med J 1986;293:252–5.
[203] Braff DL. Promises, challenges and caveats of translational research in neuropsychiatry. In: Barrett JE, Coyle JT, Williams M, editors. Translational neuroscience: applications in neurology, psychiatry, and neurodevelopmental disorders. Cambridge, England: Cambridge University Press; 2012. p. 352–3.
[204] Kenakin T. A pharmacology primer: theory, application and methods. 3rd ed. Burlington: Academic Press; 2009. p. xv.
[205] Lee JA, Uhlik MT, Moxham CM, Tomandl D, Sall DJ. Modern phenotypic drug discovery is a viable, neoclassic pharma strategy. J Med Chem 2012;55:4527–38.
[206] Giuliano KA, DeBiasio RL, Dunlay RT, Gough A, Volosky JM, Zock J, et al. High-content screening: a new approach to easing key bottlenecks in the drug discovery process. J Biomol Screen 1997;2:249–59.
[207] Enna SJ, Williams M. Challenges in the search for drugs to treat central nervous system disorders. J Pharmacol Exp Ther 2009;329:404–11.
[208] Braff L, Braff DL. The neuropsychiatric translational revolution: still very early and still very challenging. J Am Med Assoc Psychiatry 2013;70:777–9.
[209] Roth BL, Sheffler DJ, Kroeze WK. Magic shotguns versus magic bullets: selectively non-selective drugs for mood disorders and schizophrenia. Nat Rev Drug Discov 2004;3:353–9.
[210] Morphy R, Rankovic Z. Designed multiple ligands. An emerging drug discovery paradigm. J Med Chem 2005;48:6523–43.
[211] Lochmann van Bennekom MW, Gijsman HJ, Zitman FG. Antipsychotic polypharmacy in psychotic disorders: a critical review of neurobiology, efficacy, tolerability and cost effectiveness. J Psychopharmacol 2013;27:327–36.
[212] Genovese MC. Inhibition of p38: has the fat lady sung? Arthritis Rheum 2009;60:317–20.
[213] Maes T, Joos GF, Brusselle GG. Targeting interleukin-4 in asthma: lost in translation? Am J Respir Cell Mol Biol 2012;47:261–70.
[214] Hill RJ. NK1 (substance P) receptor antagonists – why are they not analgesic in humans? Trends Pharmacol Sci 2000;21:244–6.
[215] Chames P, Van Regenmortel M, Weiss E, Baty D. Therapeutic antibodies: successes, limitations and hopes for the future. Br J Pharmacol 2009;157:220–33.
[216] Groh WJ, Groh MR, Shen C, Monckton DG, Bodkin CL, Pascuzzi RM. Survival and CTG repeat expansion in adults with myotonic dystrophy type 1. Muscle Nerve 2011;43:648–51.
[217] Todd PK, Oh SY, Krans A, He F, Sellier C, Frazer M, et al. CGG repeat-associated translation mediates neurodegeneration in fragile X tremor ataxia syndrome. Neuron 2013;78:440–55.
[218] Jung J, van Jaarsveld MTM, Shieh S-Y, Xu K, Bonini NM. Defining genetic factors that modulate intergenerational CAG repeat instability in Drosophila melanogaster. Genetics 2011;187:61–71.
[219] Choi W-Y, Gemberling M, Wang J, Holdway JE, Shen M-C, Karlstrom RO, et al. In vivo monitoring of cardiomyocyte proliferation to identify chemical modifiers of heart regeneration. Development 2013;140:660–6.
[220] Li JW-H, Vederas JC. Drug discovery and natural products: end of an era or an endless frontier? Science 2009;325:161–5.
[221] Yu F, Takahashi T, Moriya J, Kawaura K, Yamakawa J, Kusaka K, et al. Traditional Chinese medicine and Kampo: a review from the distant past for the future. J Int Med Res 2006;34:231–9.
[222] Verpoorte R, Crommelin D, Danhof M, Gilissen LJWJ, Schuitmaker H, van der Greef J, et al. Commentary: "A systems view on the future of medicine: inspiration from Chinese medicine?" J Ethnopharmacol 2009;121:479–81.
[223] van der Greef J, van Wietmarschen H, Schroën J, Wang M, Hankemeier T, Xu G. Systems biology-based diagnostic principles as pillars of the bridge between Chinese and western medicine. Planta Med 2010;76:2036–47.
[224] Newman DJ, Cragg GM. Natural products as sources of new drugs over the 30 years from 1981 to 2010. J Nat Prod 2012;75:311–35.
[225] Bent S, Kane C, Shinohara K, Neuhaus J, Hudes ES, Goldberg H, et al. Saw palmetto for benign prostatic hyperplasia. N Engl J Med 2006;354:557–66.
[226] Randløv C, Mehlsen J, Thomsen CF, Hedman C, von Fircks H, Winther K. The efficacy of St. John's Wort in patients with minor depressive symptoms or dysthymia – a double-blind placebo-controlled study. Phytomedicine 2006;13:215–21.
[227] Howells LM, Moiseeva EP, Neal CP, Foreman BE, Andreadi CK, Sun Y-Y, et al. Predicting the physiological relevance of in vitro cancer preventive activities of phytochemicals. Acta Pharmacol Sin 2007;28:1274–304.
[228] Aggarwal BB, Sung B. Pharmacological basis for the role of curcumin in chronic diseases: an age-old spice with modern targets. Trends Pharmacol Sci 2009;30:85–94.
[229] Lipinski CA, Lombardo F, Dominy BW, Feeney PJ. Experimental and computational approaches to estimate solubility and permeability in drug discovery and development settings. Adv Drug Deliv Rev 2001;46:3–26.
[230] Butler MS. The role of natural product chemistry in drug discovery. J Nat Prod 2004;67:2141–53.
[231] Clardy J, Fischbach MA, Walsh CT. New antibiotics from bacterial natural products. Nat Biotechnol 2006;24:1541–50.
[232] Henrich CJ, Beutler JA. Matching the power of high throughput screening to the chemical diversity of natural products. Nat Prod Rep 2013;30:1284–98.



[233] Pound P, Ebrahim S, Sandercock P, Bracken MB, Roberts I. Where is the evidence that animal research benefits humans? Br Med J 2004;328:514–7.
[234] Bracken MB. Why animal studies are often poor predictors of human reactions to exposure. J R Soc Med 2009;102:120–2.
[235] van der Worp HB, Howells DW, Sena ES, Porritt MJ, Rewell S, O'Collins V, et al. Can animal models of disease reliably inform human studies? PLoS Med 2010;7:e1000245.
[236] Webb DR. Animal models of human disease: inflammation. Biochem Pharm 2014 [in this issue].
[237] Mullane K, Williams M. Animal models of asthma: reprise or reboot? Biochem Pharm 2014 [in this issue].
[238] Ruggeri BA, Camp F, Miknyoczki S. Animal models of disease: pre-clinical animal models of cancer and their applications and utility in drug discovery. Biochem Pharm 2014 [in this issue].
[239] McGonigle P. Animal models of CNS disorders. Biochem Pharm 2014 [in this issue].
[240] Higgins J, Cartwright ME, Templeton AC. Progressing preclinical drug candidates: strategies on preclinical safety studies and the quest for adequate exposure. Drug Discov Today 2012;17:828–36.
[241] Gabrielsson J, Dolgos H, Gillberg P-G, Bredberg U, Benthem B, Duker G. Early integration of pharmacokinetic and dynamic reasoning is essential for optimal development of lead compounds: strategic considerations. Drug Discov Today 2009;14:358–72.
[242] Bueters T, Ploeger BA, Visser SAG. The virtue of translational PKPD modeling in drug discovery: selecting the right clinical candidate while sparing animal lives. Drug Discov Today 2013;18:853–62.
[243] Goineau S, Lemaire M, Froget G. Overview of safety pharmacology. Curr Protoc Pharmacol 2013;10.1.1–10.1.8.
[244] Perel P, Roberts I, Sena E, Wheble P, Briscoe C, Sandercock P, et al. Comparison of treatment effects between animal experiments and clinical trials: systematic review. Br Med J 2007;334:197.
[245] van der Worp HB, van Gijn J. Clinical practice. Acute ischemic stroke. N Engl J Med 2007;357:572–9.
[246] O'Collins VE, Macleod MR, Donnan GA, Horky LL, van der Worp BH, Howells DW. 1,026 experimental treatments in acute stroke. Ann Neurol 2006;59:467–77.
[247] Macleod MR, van der Worp HB, Sena ES, Howells DW, Dirnagl U, et al. Evidence for the efficacy of NXY-059 in experimental focal cerebral ischaemia is confounded by study quality. Stroke 2008;39:2824–9.
[248] Macleod MR, Fisher M, O'Collins V, Sena ES, Dirnagl U, Bath PMW, et al. Reprint: good laboratory practice: preventing introduction of bias at the bench. J Cerebral Blood Flow Metab 2009;29:221–3.
[249] Bath PMW, Gray LJ, Bath AJG, Buchan A, Miyata T, Green AR, et al. Effects of NXY-059 in experimental stroke: an individual animal meta-analysis. Br J Pharmacol 2009;157:1157–71.
[250] Sena ES, van der Worp HB, Bath PMW, Howells DW, Macleod MR. Publication bias in reports of animal stroke studies leads to major overstatement of efficacy. PLoS Biol 2010;8:e1000344.
[251] von Herrath M, Nepom GT. Animal models of human type 1 diabetes. Nat Immunol 2009;10:129–32.
[252] Jucker M. The benefits and limitations of animal models for translational research in neurodegenerative diseases. Nat Med 2010;16:1210–4.
[253] Seok J, Warren HS, Cuenca AG, Mindrinos MN, Baker HV, Xu W, et al. Genomic responses in mouse models poorly mimic human inflammatory diseases. Proc Natl Acad Sci USA 2013;110:3507–12.
[254] Williams M, Enna SJ. Defining the role of pharmacology in the emerging world of translational research. Adv Pharmacol 2009;57:1–30.
[255] Le Fanu J. The rise and fall of modern medicine. 2nd ed. London: Abacus; 2011. p. 300–6.
[256] Klein DF. The loss of serendipity in psychopharmacology. J Am Med Assoc 2008;299:1063–5.
[257] Ioannidis JP. Extrapolating from animals to humans. Sci Transl Med 2012;4:151ps15.
[258] van der Worp HB, Howells DW, Sena ES, Porritt MJ, Rewell S, O'Collins V, et al. Can animal studies of disease reliably inform human studies? PLoS Med 2010;7:e1000245.
[259] Landis SC, Amara SG, Asadullah K, Austin CP, Blumenstein R, Bradley EW, et al. A call for transparent reporting to optimize the predictive value of preclinical research. Nature 2012;490:187–91.
[260] Hooijmans CR, Ritskes-Hoitinga M. Progress in using systematic reviews of animal studies to improve translational research. PLoS Med 2013;10:e1001482.
[261] Heng HH. HeLa genome versus donor's genome. Nature 2013;501:167.
[262] Gillet JP, Calcagno AM, Varma S, Marino M, Green LJ, Vora MI, et al. Redefining the relevance of established cancer cell lines to the study of mechanisms of clinical anti-cancer drug resistance. Proc Natl Acad Sci USA 2011;108:18708–13.
[263] Yamanaka S. Induced pluripotent stem cells: past, present, and future. Cell Stem Cell 2012;10:678–88.
[264] Merkle FT, Eggan K. Modeling human disease with pluripotent stem cells: from genome association to function. Cell Stem Cell 2013;12:656–68.
[265] Bellin M, Marchetto MC, Gage FH, Mummery C. Induced pluripotent stem cells: the new patient? Nat Rev Mol Cell Biol 2012;13:713–26.
[266] Puri MC, Nagy A. Concise review: embryonic stem cells versus induced pluripotent stem cells: the game is on. Stem Cells 2012;30:10–4.

[267] Ding Q, Lee Y-K, Schaefer EAK, Peters DT, Veres A, et al. A TALEN genome-editing system for generating human stem cell-based disease models. Cell Stem Cell 2013;12:238–51.
[268] Walsh RM, Hochedlinger K. A variant CRISPR-Cas9 system adds versatility to genome engineering. Proc Natl Acad Sci USA 2013;110:15514–5.
[269] Kim C, Wong J, Wen J, Wang S, Wang C, Spiering S, et al. Studying arrhythmogenic right ventricular dysplasia with patient-specific iPSCs. Nature 2013;494:105–10.
[270] Lahti AL, Kujala VJ, Chapman H, Koivisto A-P, Pekkanen-Mattila M, Kirkela E, et al. Model for long QT syndrome type 2 using human iPS cells demonstrates arrhythmogenic characteristics in cell culture. Dis Models Mech 2012;5:220–30.
[271] Chi KR. Revolution dawning in cardiotoxicity testing. Nat Rev Drug Discov 2013;12:565–7.
[272] Kaye JA, Finkbeiner S. Modeling Huntington's disease with induced pluripotent stem cells. Mol Cell Neurosci 2013;56:50–64.
[273] Drouin-Ouellet J, Barker RA. Parkinson's disease in a dish: what patient specific-reprogrammed somatic cells can tell us about Parkinson's disease, if anything? Stem Cells Int 2012;2012:926147.
[274] Reinhardt P, Schmid B, Burbulla LF, Schöndorf DC, Wagner L, Glatza M, et al. Genetic correction of a LRRK2 mutation in human iPSCs links parkinsonian neurodegeneration to ERK-dependent changes in gene expression. Cell Stem Cell 2013;12:354–67.
[275] Lee G, Ramirez CN, Kim H, Zeltner N, Liu B, Radu C. Large-scale screening using familial dysautonomia induced pluripotent stem cells identifies compounds that rescue IKBKAP expression. Nat Biotechnol 2012;30:1244–8.
[276] Flood DG, Marek GJ, Williams M. Developing predictive CSF biomarkers – a challenge critical to success in Alzheimer's disease and neuropsychiatric translational medicine. Biochem Pharmacol 2011;81:1422–34.
[277] Talpos J, Steckler T. Touching on translation. Cell Tissue Res 2013. http://dx.doi.org/10.1007/s00441-013-1694-7.
[278] Wehling M. Translational medicine: can it really facilitate the transition of research 'from bench to bedside'? Eur J Clin Pharmacol 2006;62:91–5.
[279] Wehling M. Drug development in the light of translational science: shine or shade? Drug Discov Today 2011;16:1076–83.
[280] Wehling M. Assessing the translatability of drug projects: what needs to be scored to predict success? Nat Rev Drug Discov 2009;8:541–6.
[281] LoRusso PL. Phase 0 clinical trials: an answer to drug development stagnation? J Clin Oncol 2009;27:2586–8.
[282] Zetterberg H, Mattsson N, Blennow K, Olsson B. Use of theragnostic markers to select drugs for phase II/III trials for Alzheimer disease. Alzheimer's Res Ther 2010;2:32.
[283] De Meyer G, Shapiro F, Vanderstichele H, Vanmechelen E, Engelborghs S, De Deyn PP, et al. Diagnosis-independent Alzheimer disease biomarker signature in cognitively normal elderly people. Arch Neurol 2010;67:949–56.
[284] CMS. Decision memo for beta amyloid positron emission tomography in dementia and neurodegenerative disease (CAG-00431N); 2013. http://www.cms.gov/medicare-coverage-database/details/nca-decision-memo.aspx?NCAId=265&utm_medium=email&utm_source=govdelivery.
[285] Wendler A, Wehling M. Translatability scoring in drug development: eight case studies. J Transl Med 2012;10:39.
[286] Féliste R, Delebassée D, Simon MF, Chap H, Defreyn G, Vallée E, et al. Broad spectrum anti-platelet activity of ticlopidine and PCR 4099 involves the suppression of the effects of released ADP. Thromb Res 1987;48:403–15.
[287] Fitzgerald DJ, FitzGerald GA. Historical lessons in translational medicine: cyclooxygenase inhibition and P2Y12 antagonism. Circ Res 2013;112:174–94.
[288] Hollopeter G, Jantzen HM, Vincent D, Li G, England L, Ramakrishnan V, et al. Identification of the platelet ADP receptor targeted by antithrombotic drugs. Nature 2001;409:202–7.
[289] Shuaib A, Lees KR, Lyden P, Grotta J, Davalos A, Davis SM, et al. NXY-059 for the treatment of acute ischemic stroke. N Engl J Med 2007;357:562–71.
[290] Diener HC, Lees KR, Lyden P, Grotta J, Davalos A, Davis SM, et al. NXY-059 for the treatment of stroke: pooled analysis of the SAINT I and II trials. Stroke 2008;39:1571–8.
[291] van de Steeg E, Kleemann R, Jansen HT, van Duyvenvoorde W, Offerman EH, Wortelboer HM, et al. Combined analysis of pharmacokinetic and efficacy data of preclinical studies with statins markedly improves translation of drug efficacy to human trials. J Pharmacol Exp Ther 2013. http://dx.doi.org/10.1124/jpet.113.208595.
[292] Goodman L, Norbeck T. Who's to blame for our rising healthcare costs? Forbes; 2013. http://www.forbes.com/sites/physiciansfoundation/2013/10/03/whos-to-blame-for-our-rising-healthcare-costs/.
[293] Booth B. Academic bias & biotech failures. Life SciVC; 2011. http://lifescivc.com/2011/03/academic-bias-biotech-failures/.
[294] Uitdehaag JCM. The seven types of drug discovery waste: toward a new lean for the drug industry. Drug Discov Today 2011;16:369–71.
[295] Germann PG, Schuhmacher A, Harrison J, Law R, Haug K, Wong G. How to create innovation by building the translation bridge from basic research into medicinal drugs: an industrial perspective. Hum Genomics 2013;7:5.
[296] Fang FC, Casadevall A. Lost in translation – basic science in the era of translational research. Infect Immun 2010;78:563–6.
[297] Casadevall A, Fang FC. Reforming science: methodological and cultural reforms. Infect Immun 2012;80:891–6.
[298] Fang FC, Casadevall A. Reforming science: structural reforms. Infect Immun 2012;80:897–901.

