Technologies such as Artificial Intelligence (AI) can potentially transform many aspects of healthcare. For instance, AI is increasingly used to enhance clinical diagnosis through algorithms trained on millions of data points drawn from medical images, symptoms and biomarkers recorded in electronic medical records. AI is also drawing on large amounts of electronic data to help predict future diseases and to optimize and customize treatment regimens. AI has already been applied in fields ranging from oncology to infectious diseases, and further uses of large language models are constantly being explored.
AI also poses serious threats. These relate to the use of AI to collect and analyze massive amounts of personal data to profile and target people without their knowledge. While such data can be used to improve disease surveillance, it can also lead to the erosion of privacy and the misuse of personal data by a handful of Big Tech companies for commercial purposes. It is therefore essential that policymakers and the public consider and debate the appropriate use of AI in healthcare: who determines it, and how to ensure that AI is used equitably and reaches those in desperate need of improved healthcare. Beyond healthcare, there are other risks as well. With rapidly evolving technology and the growing difficulty of separating reality from deepfakes, unbridled implementation of AI could lead to a breakdown of trust and deepen social unrest and strife, with consequences for public health. Recognizing this, the UN Secretary-General Antonio Guterres (UNSG) has called for transparency, accountability and oversight of AI.
In addition, AI has the potential to enable a new generation of lethal autonomous weapons that could result in unimaginable public health harm. Indeed, the UNSG spoke of these risks in his address to the UN Security Council, cautioning against the “horrific levels of death and destruction” that malicious use of AI could potentially cause.
While Big Tech companies are presenting themselves as “saviors of algorithmic harm and not perpetuators,” a key question arises: how do we determine safety, beyond what some have called “safety-washing,” without the necessary commitments, practices and accountability? Safety, as the writers of a recent editorial in Science Magazine pointed out, is not just about “alignment of values” or avoiding harm, but about “understanding and mitigating risks to those values” and ensuring that technology does not get used in the “pursuit of power and profit at the expense of human rights”. This is key if AI is ever to be seen as a force for good in healthcare and beyond. But technological fixes alone are not enough. To begin with, this requires a deeper understanding of the political economy of AI and its application.
The future outcomes of the development of AI will depend on policy decisions taken now and on the effectiveness of regulatory institutions to minimize risk and harm and maximize benefit. Crucially, as with other technologies, preventing or minimizing the threats posed by AI will require international agreement and cooperation.