In October 2019, the Planisware team attended the Cambridge Healthtech Institute's (CHI) Executive Decision Making conference in Philadelphia, aptly themed: Portfolio Management: Aligning Portfolio and Productivity with Corporate Strategy to Drive Innovation and Value.
For those of you who don't live and breathe the world of biotech every day, this may sound like a lot of ground to cover over the course of a two-day conference. (And, indeed, it was!) Take a moment to read the conference overview for a bit more context:
"In an environment of declining revenues and uncertain commercial success, biopharma, device and R&D companies need to continuously evaluate their portfolios. Manufacturers must make tough decisions about which products and projects to pursue to optimize long-term revenue and reduce overall risk. Cambridge Healthtech Institute’s and the BioPharma Strategy Series’ 14th Annual Portfolio Management: Aligning Portfolio and Productivity with Corporate Strategy to Drive Innovation attracts over 100 senior R&D executives from the pharma, biotech, device, IT, public and governmental communities who share best practices in project and portfolio management, R&D innovation, decision analysis, change management, forecasting, and the improvement of operational models. Effective portfolio management requires the alignment of portfolio and productivity with corporate strategy, supporting agile development and adaptation of data into planning."
As you can imagine, we were most interested in learning about the latest project portfolio management (PPM) trends facing biopharmaceutical organizations today. In fact, one of the most anticipated conference topics, and one very near and dear to our hearts, was the implementation of artificial intelligence (AI) for resource and cost estimation, and its impact on the entire new drug development process, from clinical studies to new product delivery.
Presenters often discussed their methodologies, many of which leveraged Planisware's PPM tools, and shared key learnings. Three themes kept cropping up across all of the presentations:
1. High-quality data is critical for building accurate models
The importance of data was a recurring theme throughout the conference, just as it has become a key focus area across virtually every industry today. The general assumption: the more high-quality historical data you can tap into, the more accurate your models will be. Organizations with access to five or more years of high-quality metrics, for example, typically have a clear advantage in identifying trends that can then be optimized for greater productivity, cost and time efficiency, and future product development success. One caveat, however: possessing data doesn't necessarily mean the data is high quality, and poor-quality data will inevitably undermine the accuracy of the models built on it.
2. Business needs must be optimized against data
There can be a significant difference between internal (read: human-created) estimates of resourcing and budgetary needs and those generated by the AI-driven, data-based models mentioned above. In practice, businesses more often than not overestimate their needs at the beginning of the fiscal or calendar year, then either forget to update those estimates as the year progresses or fail to cross-reference them against the AI-produced estimates for accuracy. This needs to change. Businesses must continually review, reevaluate, and optimize their needs over time, leveraging the quality data available to them, and work to close the gap between human estimates and near-real-time, data-driven resource and budget estimates. Making this a regular, ongoing best practice will allow them to validate their data-driven models and make more accurate predictions across various project types in the future.
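To make this concrete, the "gap" between human and model estimates can be tracked as a simple metric. The sketch below is purely illustrative: the project names, dollar figures, and the choice of an average percentage gap are all our own assumptions, not a method presented at the conference.

```python
# Hypothetical illustration: measuring how far human budget estimates
# drift from model-generated estimates. All figures are invented.

def estimate_gap(human, model):
    """Average absolute percentage gap between paired estimates,
    expressed relative to the model's estimate."""
    gaps = [abs(h - m) / m for h, m in zip(human, model)]
    return sum(gaps) / len(gaps)

# Budget estimates (in $M) for four hypothetical projects.
human_estimates = [12.0, 8.5, 20.0, 5.0]
model_estimates = [10.0, 9.0, 18.0, 5.5]

gap = estimate_gap(human_estimates, model_estimates)
print(f"Average gap vs. model estimates: {gap:.1%}")
```

Reviewing a metric like this quarterly, rather than once a year, is one way to catch the "set it in January and forget it" pattern described above.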
3. AI implementation is still a work-in-progress
Although AI has begun playing a much bigger role in the PPM processes of pharmaceutical companies and biotech firms, there is still plenty of room for improvement. For example, many businesses have over-hyped the promise of AI and still rely heavily on refined conventional algorithms, which they all too often confuse with true AI, to drive their PPM processes. However, as investment in AI continues to grow across the industry, albeit slowly, PPM processes will eventually evolve beyond the purely "operational" to play an increasingly important role in the end-to-end drug development life cycle: drug design, inventory deployment, disease progression modeling, treatment outcome prediction, and more.
As an offshoot to these discussions, we were also interested in understanding how pharmaceutical companies and biotech firms both make decisions and measure the quality of those decisions. At a high level, there seem to be two primary challenges facing the industry:
- Personalization: There is no one-size-fits-all approach to decision-making. Every business is different, so its day-to-day processes and approaches to decision-making will vary as well. Overall organizational structure, including the number of office locations a business has, will shape how decisions are made and applied. What has become clear, however, is that while there are some general industry-wide PPM best practices to follow, each organization must create a decision-making framework tailored to its own structure and timeline constraints in order to be successful. The obvious challenge: this can be a difficult and time-consuming task given all of the factors that must be weighed when building a personalized decision-making model.
- Implementation: Introducing new decision-making processes means organizations, from the top down, must adapt to change, and that isn't always easy. It requires businesses to put safeguards in place, such as dedicated support teams, executive sponsors, and process accountability, to ensure that new processes are integrated smoothly into an organization's workflow. It's also important to begin measuring success immediately after implementation: doing so helps refine and optimize new processes over time and makes it easier for organizations to define what constitutes a "good" decision. We heard on several occasions that businesses have even started using qualitative "scorecards," in addition to quantitative data, to assess these best practices and operational standards on an ongoing basis.
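A qualitative scorecard of the kind mentioned above could be as simple as a handful of rated criteria per decision. This is only a minimal sketch under our own assumptions; the criteria, the 1-to-5 scale, and the example decision are invented, and a real scorecard would be tailored to each organization's framework.

```python
# Minimal sketch of a qualitative decision scorecard.
# Criteria names, scale, and the sample decision are hypothetical.

from dataclasses import dataclass, field

@dataclass
class DecisionScorecard:
    decision: str
    # Each criterion is rated from 1 (poor) to 5 (excellent).
    ratings: dict = field(default_factory=dict)

    def rate(self, criterion: str, score: int) -> None:
        if not 1 <= score <= 5:
            raise ValueError("score must be between 1 and 5")
        self.ratings[criterion] = score

    def overall(self):
        """Average rating across all criteria; None if nothing rated yet."""
        if not self.ratings:
            return None
        return sum(self.ratings.values()) / len(self.ratings)

card = DecisionScorecard("Advance compound X to the next phase")
card.rate("Strategic fit", 4)
card.rate("Stakeholder alignment", 3)
card.rate("Quality of data used", 5)
print(f"{card.decision}: {card.overall():.2f} / 5")
```

Pairing a rolled-up score like this with quantitative outcome data gives the decision itself, not just its result, something measurable to review over time.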
So, why is this important?
The key learnings above are only a narrow sampling of what was discussed at the CHI Executive Decision Making conference in Philadelphia, PA. Other topics ranged from scenario analytics to early-stage research project management to cross-industry decision analysis (and the list goes on). There were plenty of lessons for the biotech and non-biotech worlds alike, underscoring yet again the importance of strong PPM processes for driving productivity, efficiency, and overall business success. One thing, however, was resoundingly clear: process makes perfect.