Warm greetings from New York. Many thanks for inviting me to speak with you today on the growing importance of international AI governance in a post-COVID world. The potential of artificial intelligence to change our lives is limitless, and still poorly understood.
AI technologies are being used in everything from commercial services to public services, in areas as diverse as education, health care, infrastructure, dating apps, and much more. We are moving towards inhabiting ever smarter cities, where AI will utilize public data to identify the most efficient distribution of resources relating to transportation services, utilities and waste management. Our private lives are also increasingly influenced by AI applications, and the onset of the COVID-19 pandemic has only accelerated and accentuated this use. AI has been instrumental in tracking the disease, predicting its evolution, and advancing diagnostics, treatments, and vaccine research and development.
As 2020 comes to a close, the most pronounced proliferation of the use of AI is probably in the health care sector. Accenture has predicted that applications of AI in health care will grow by no less than 40% by 2021.
The pandemic has also heightened our dependence on the predictive capacities of AI systems. On the one hand, this has allowed us to better understand and cope with the evolution of COVID, as well as with climate change and other threatening global phenomena. On the other hand, this predictive capacity has also proven unreliable in unexpected circumstances, making us vulnerable to systemic errors. This was salient in e-commerce predictions, which failed to anticipate human behavior during the pandemic, leading to significant disruption of global supply and demand. AI’s predictive capacity can also be used to manipulate consumers and voters and to fuel polarization, as we saw with the Cambridge Analytica case, and as documented in the Netflix documentary The Social Dilemma.
The limitations and dangers of AI are also becoming clearer. Malicious actors can use AI for ever more sophisticated cyber attacks, while lethal autonomous weapons can be used for crimes with no easily traceable perpetrator. AI can be used for mass surveillance, whether for reasons of political suppression, or for commercial exploitation. Harm can also come from unintentional misuse, like discriminatory algorithms based on biased data that amplify unequal access to jobs, to justice or to finance.
All this shows how we have yet to fully understand the societal implications of the ever expanding use of artificial intelligence.
We do know, sometimes too late, that AI is subject to, and can amplify, the inherent biases and preferences of its developers, sponsors, and users. Our ability to maximize the benefits of AI while curtailing its potential risks is harmed by the growing fragmentation of the digital space. We are seeing geopolitical fault lines between major powers, with technology emerging as the new battleground. Superpower rivalry and frictions are made worse by the deepening digital divide between the North and the developing South.
3.6 billion people, mostly in developing countries, remain unconnected to the internet, and for them the benefits of AI are a distant dream. The global South is lagging far behind in patents, intellectual property and expertise relating to AI. These countries are dependent on tools and expertise from more developed countries, and are vulnerable to data exploitation practices that deny them ownership of their own data, or even visibility into how their data is used. To address this complicated, challenging global landscape, we urgently need global leadership and global multi-stakeholder cooperation at the highest levels. No single country or company can design comprehensive and anticipatory guidelines to manage the rise of AI and its ripple effects around the globe. We must come together to create a viable international cooperation and governance framework for AI.
However, there are a number of challenges to realizing such an objective. The first is the digital divide I spoke of before: developed countries with extensive, high-speed broadband networks are rapidly adopting AI applications, far outpacing the rate at which this is happening in developing countries.
Secondly, many existing initiatives on artificial intelligence lack any representation and engagement from the global South. Precisely because AI has the potential to significantly impact and benefit developing countries, an international cooperation model must address how the say on AI can be made much more inclusive.
The third challenge is one of big data, the fuel for AI. Any standardization of data sharing must account for the challenges of inaccurate and incomplete data in developing countries without excluding them. Moreover, we need to ensure greater diversity in data sets to help prevent social and cultural biases from being perpetuated and amplified by AI systems. We also need international principles that underpin how citizens' data is utilized, stored, and shared, so as to protect fundamental human rights, like the right to privacy.
All of this means we need flexible and innovative forms of global AI cooperation and governance that prioritize the responsible, transparent use of AI and the privacy and protection of AI users. Governance can take many forms, from normative principles of AI ethics to technical standards and soft law to regulation and taxation.
A number of initiatives, such as the OECD AI Policy Observatory, the Global Partnership on AI (GPAI) and the International Congress for the Governance of AI, are actively working to support such international AI governance efforts, but much work remains, particularly in ensuring greater inclusivity and more adequate global representation in AI decision-making fora. This is where the United Nations can play an important role in bringing all concerned governments, the private sector, civil society, academia and the technological community to the same table to work together.
The United Nations Secretary-General, António Guterres, has made clear that how we address the challenges of the digital world is one of the key issues of our time. He has thus launched a Roadmap for Digital Cooperation, which lays out a vision on key digital issues, such as universal connectivity, digital human rights and digital inclusion. In his roadmap, he specifically highlights AI as an area that needs greater global steerage, and proposes the establishment of a multi-stakeholder advisory body for global AI cooperation, to advance the development and use of AI that is trustworthy, human rights based, safe, sustainable, and promotes peace.
This is part of important work being done by the broader UN family in this domain: UNESCO is working on global AI ethics standards, the ITU on building capacity on AI for good, and UNICEF on AI for children. The Secretary-General has also called for a ban on the use of lethal autonomous weapons.
But alongside standards, guidelines and bans, we should also find ways to support, and privilege, positive AI developments and stigmatize negative uses and development of AI, so that we can discourage the misuse of technology in ways that harm rather than serve humanity.
This has been the approach taken, for example, to nuclear research and development, as well as in other areas of scientific development, such as biochemistry, where certain forms of research are stigmatized. Inspired by these examples, I believe we should consider the recommendation made by Max Tegmark, the Future of Life Institute’s director, which was also articulated by the Secretary-General’s High Level Panel on Digital Cooperation: to create a sort of Hippocratic oath for AI researchers and practitioners, similar to that taken by medical practitioners, to put humans first and to do no harm.
Of all the emerging technologies, artificial intelligence stands alone as the one with the greatest potential to empower, but also to disrupt. This is why the stakes for international cooperation in this area are the highest. The fact that AI applications advance faster not only than normative and regulatory frameworks, but also than our ability to understand their impact on us, underscores the urgency of this call.
AI will be an essential tool in our journey towards a more prosperous future. We must ensure that it is used in a transparent, trustworthy manner that upholds human rights and human dignity, promotes our safety and security, and fosters inclusive peace. That is our common task. Thank you!
For more details, visit www.tsinghuaaiforum.org