Generative AI in Pharma: How Do You Prepare the Wider Organisation to Use Generative AI to Get a Real Value-Add?

2024-01-19 |  Alice Nyborg & Philip Winkworth

No matter how organisations feel about generative AI tools, the technology is truly transformative and is here to stay. AI solutions are the top emerging technology planned for deployment across enterprises and we need to ensure that the tools are used effectively in a customer-centric way – both for internal and external customers.

As the industry continues to evolve, the potential of generative AI to revolutionise processes and outcomes cannot be overlooked. These foundation models offer significant advantages in pharma, such as accelerating the drug discovery process, rapid content and idea generation, improved insights from textual data, improved efficiency (by automating long and repetitive tasks) and improved personalisation. Noteworthy examples include Bayer, which plans to use Gen AI tools to streamline the clinical trial process by enhancing analysis of extensive data sets 1. Similarly, Janssen intends to use Gen AI coding tools to speed up development cycles and testing of custom marketing tools 2.

Late in 2023, we laid out how humans must remain at the heart of AI in healthcare 3. Building on this approach, implementing AI at a large scale requires more than a surface-level understanding. It demands the preparedness of the entire organisation to harness the true value-add that generative AI brings to the table. There are 3 fundamental questions that leadership should answer when formulating a strong strategy for generative AI within their organisation:

  • Do you know and trust the information generated? One of the major challenges with these tools is the potential for inaccurate and inconsistent information. The tools are also subject to ‘hallucinations’: the generation of misinformation that appears highly convincing, including false references and citations. This limits the reliability of the outputs – AI is only as good as the person using it. Because generative AI relies on neural networks with billions of parameters, it is difficult for the tool to attribute its information to verifiable facts. If not thoroughly reviewed, outputs have the potential to misguide analysis and conclusions; if inaccuracies are missed and shared broadly, correcting the record can be difficult and the consequences for a strategy can be devastating. The novelty of these tools also means that they may have unknown capabilities that pose serious threats. Finally, unchecked algorithms, or biases in the data a model is trained on, can result in biased and discriminatory outcomes, including algorithmic fairness bias and historical biases that may perpetuate systemic inequities.
  • Are you complying with regulatory requirements? In a heavily regulated industry handling sensitive patient data, privacy concerns should be top-of-mind for pharma companies. In some cases, the third-party providers of AI tools will also have the right to use and/or disclose the inputs. There are also data leakage concerns regarding non-anonymisation and the collection of employee data. There is currently a push in the open-source community to declare what data a model was trained on, to obtain permission to use underlying data sets for clearly specified purposes, and to maintain clear traceability of model inputs and outputs – important conversations about integrity, provenance and quality. Moving forward, future-proofing algorithms will be critical, as we have seen that legislation can be applied retroactively (c.f. data privacy and GDPR), so securing the right permissions when gathering training data / data sets will protect against potential pitfalls.
  • Are you owning the value you create? The novelty of AI has raised IP and copyright ownership concerns regarding both inputs and outputs. Most governments still hold ambiguous positions on the ownership of AI-generated work / content; however, we are seeing the first wave of generative AI IP litigation in the US 4. These decisions have the potential to shape the legal landscape of AI. Procurement will therefore grow in importance, with contract wording used to establish ownership of future value upfront.

Preparing the Wider Organisation for Generative AI

While the increased adoption of AI is beneficial, complications can arise if organisations lack adequate governance frameworks. By applying the following 3 principles, organisations can set themselves up correctly to maximise the value of Gen AI:

  1. Being Clear about the Tool, the Use Cases and the Target Audience – A crucial initial step involves defining the tool(s) to be used, recognising their potential applications and defining who will use them. It is important to be clear about what data may be entered into the tools. Equally important is establishing success metrics and defining how to measure them.
  2. Building a Strong Governance Framework to address privacy and security concerns – Effective use of generative AI requires a clear delineation of responsibilities and accountability. Organisations should therefore create a robust governance framework where all relevant stakeholders (including legal, compliance, C-suite, board and HR) are involved in the decision-making process. As a minimum, the framework should seek to address the following:
    • Creating well-defined roles & responsibilities to set up for success - Establishing well-defined roles and responsibilities is crucial for the successful implementation of generative AI. Organisations should strongly consider creating a Chief AI Officer (CAIO) role to provide the relevant governance and to ensure they have the right talent to successfully implement and effectively utilise the technology.
    • Ensuring the right people can deliver on the responsibilities – When implementing Gen AI, having talent with a combination of background and skills in pharma and AI will be imperative. The effective deployment of generative AI hinges on the ability of these professionals to navigate complex regulatory frameworks, address ethical considerations, and interpret the extensive datasets.
    • Making sure you have diversity of thought - Organisations should also consider diversity (including racial, ethnic and socioeconomic) within their AI development teams, as the omission of perspectives and experiences can result in biased outcomes. This structure embeds the right ways of working and mindset, so that should issues arise, the correct compliance adjustments are made. Furthermore, applying bias-identifying frameworks with effective pre- and post-deployment measures should minimise any inequitable outcomes.
  3. Implementing the Change within the Organisation – Embracing generative AI involves a cultural shift – allowing for a mindset that embraces adaptation and innovation. Importantly, organisations should ensure they develop the right skills to welcome these tools. The first is prompting: knowledge of the data sets being used and the tool's algorithmic functionality, and the ability to iterate prompts for the intended use. Organisations making heavy use of generative AI should consider creating a dedicated role for this. Another key skill set is information scepticism: an understanding of the tools' technological limitations that helps validate the outputs generated and ensures they are appropriate for follow-on use / users. This could also be a dedicated role / team / centre of excellence function in the organisation, but ideally one separate from the prompting role, as the two may have conflicting objectives and goals. These professionals should be able to communicate the outputs of using AI in an accessible way, keeping the whole organisation up to date and removing the fear of the unknown. Finally, frequent and effective internal comms on the roll-out of any Gen AI use cases can support this shift and ensure you stay true to your ethos and ‘North Star’.

Future Outlook

Rapid and extensive adoption of generative AI has many benefits, yet the speed of deployment means that organisations need to elevate their strategic oversight and governance to ensure responsible usage and effective risk mitigation. With a secure framework in place, pharma can confidently leverage the benefits of generative AI while remaining firmly committed to an ethical, patient-centric approach.

If you would like to discuss any of the points raised here or need an independent view on your Gen AI strategy, please get in touch.

About the authors

Phil Winkworth, PhD, is a Senior Manager in PEN / Wavestone’s Life Sciences team, transforming the LS sector through Digital Strategy, Digital Health, R&D, Tech and Commercial Strategy. He strives to make Pharma and Healthcare provision more equitable for patients across the value chain.

LinkedIn: https://www.linkedin.com/in/philip-winkworth-121a6716/

Email: philip.winkworth@wavestone.com

Alice Nyborg is a Consultant in the Life Sciences team, working across various specialisms. She is dedicated to helping pharmaceutical companies innovate to deliver improved value for patients and healthcare providers.

LinkedIn: https://www.linkedin.com/in/alicenyborg/

Email: alice.nyborg@wavestone.com