For several years, blockbuster drugs have been the major source of revenue for the pharmaceutical industry, especially Big Pharma. However, several of them are going off patent in the next few years. In an industry that depends heavily on the market monopolies conferred by intellectual property, these patent expirations are estimated to lead to sales losses of $290 billion between now and 2018 (Fierce Pharma). Additionally, a 2003 Bain & Co. report estimates the average cost of developing, marketing and launching a drug to be as high as $1.7 billion, with each drug taking a minimum of ten years from development to launch. Varied regulatory and legal restrictions worldwide make new drug approvals an even more arduous process: only 3 in every 20 drugs are approved in the United States, most failures occur in Phase II clinical trials, and only 1 out of every 3 approved drugs makes a significant profit. With no true ‘blockbuster’ in the pipeline and an increasingly rigid regulatory environment, there is an ever-growing need to reduce the time and cost of drug development.
In this atmosphere, several distinct strategies can be observed. The pharmaceutical industry is heading towards consolidation, with major companies merging to form even bigger conglomerates, as in the Pfizer-Wyeth and Merck-Schering Plough mergers, in a bid to expand product lines and increase market footprint. Big Pharma has also ‘outsourced’ R&D risk to smaller companies, opting to buy out firms with established technologies rather than investing in R&D from scratch. However, there is yet another strategy that offers a safer avenue for risk reduction in the drug discovery field.
McKinsey’s August 2010 quarterly report predicts that big data will be one of the most crucial competencies for technology-intensive companies. In the pharmaceutical industry, it is especially relevant for formulating R&D strategies and for expanding and pruning product portfolios. Big data analysis is currently seen in the form of combinatorial chemistry and high-throughput screening. With the rise of the ‘omics’ fields (genomics, proteomics, metabolomics, etc.), several terabytes of biological data are generated every year, alongside a surge in electronic health records. Mining and analyzing this data for correlations and patterns can greatly help in identifying new disease areas on which to focus research and in screening potential drug candidates in silico for those specific disease states and population sets. This helps weed out poor drug candidates before synthesis, greatly reducing initial R&D investment and increasing the probability of returns on the investments that are made. Further intelligent mining can be used to design smart clinical trials with achievable endpoints and inclusion/exclusion criteria, so that clinical trial failure rates fall.
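To make the idea of weeding out candidates before synthesis concrete, here is a minimal sketch of an in-silico pre-screen using Lipinski’s well-known “rule of five” heuristic for oral drug-likeness. The candidate names and property values below are entirely hypothetical; a real screen would compute descriptors from chemical structures and apply far richer models.

```python
# Illustrative sketch: filtering a candidate library with Lipinski's
# rule of five. All candidate records here are hypothetical examples.

def passes_rule_of_five(mol_weight, log_p, h_donors, h_acceptors):
    """Return True if a candidate meets all four rule-of-five criteria."""
    return (mol_weight <= 500       # molecular weight in daltons
            and log_p <= 5          # octanol-water partition coefficient
            and h_donors <= 5       # hydrogen-bond donors
            and h_acceptors <= 10)  # hydrogen-bond acceptors

# Hypothetical library: (name, MW, logP, H-bond donors, H-bond acceptors)
candidates = [
    ("cand-001", 342.4, 2.1, 2, 5),  # within all four limits
    ("cand-002", 612.7, 4.8, 3, 9),  # fails: molecular weight too high
    ("cand-003", 421.5, 6.3, 1, 4),  # fails: too lipophilic (logP > 5)
]

shortlist = [name for name, mw, lp, hd, ha in candidates
             if passes_rule_of_five(mw, lp, hd, ha)]
print(shortlist)  # only cand-001 survives the pre-screen
```

Even a crude filter like this, applied across millions of virtual compounds, narrows the set that ever reaches the bench, which is the cost-reduction argument made above.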
Big data analysis therefore has significant potential to improve the risk-return profile of the pharmaceutical industry. What strategies do you think will be most effective in the pharmaceutical industry? Do you think big data will play a role?
"Clouds, big data, and smart assets: Ten tech-enabled business trends to watch," McKinsey Quarterly (2010).