Three questions about Generative AI we need to answer

Recently, the field of Artificial Intelligence (AI) has attracted widespread attention following the public release of generative AI tools. At the UN Department of Political and Peacebuilding Affairs (UN DPPA), AI has been on the radar for some years, both as a policy issue and as a practical tool. In its first-ever meeting focused on AI, held in July 2023, the Security Council highlighted the transformative opportunities AI offers for addressing global challenges, as well as the risks it poses, including its potential to intensify conflict through the spread of misinformation and malicious cyber operations. Like many other organizations, the UN is responding to these developments and engaging early by hosting discussions with leading experts and examining the policy challenges involved.

Reflecting on its experience with generative AI, the UN sees three key questions that need to be addressed: How might generative AI change the peace and security landscape? How can generative AI enable achievements that were previously unattainable? And what does it mean to use generative AI ethically?

Generative AI tools lend themselves to both beneficial and adversarial uses. They can rapidly spread disinformation through the generation of synthetic media, and they can also be put to use by armed forces. Many countries regard generative AI as a critical technology, and competition to establish leadership in the field is intensifying. At the same time, generative AI can foster mutual understanding among actors, facilitate localized solutions, and provide data-driven insights into historical conflicts. The full impact of AI on peace and security is yet to unfold, so it is important to monitor and record how generative AI is applied and how it affects peace and stability, and to engage in foresight exercises that anticipate challenges and opportunities.

Generative AI has the potential to enable achievements that were previously unattainable. It could transform how peace and security mandates are delivered at the UN, and it has already shown promise in supporting diplomacy and citizen engagement. Generative AI can improve efficiency in the public sector and enable better services for the public. Other potential use cases include AI-powered fact-checking to help counter misinformation campaigns and AI models that assist diplomats in gaining a nuanced understanding of political issues.

Using generative AI responsibly raises a range of ethical considerations. UNESCO has adopted recommendations on the ethics of AI, and these have also been examined in the context of generative AI tools. Ethical questions range from attribution of responsibility for unethical generated content to bias and cultural sensitivity in AI models. The challenges of bias, representation, and cultural sensitivity become especially prominent in the peace and security domain. The lack of clear metrics for measuring ethical standards, along with the need for oversight and for access to core datasets and algorithms, are areas that require further scientific research and attention.

As generative AI evolves, it brings both promise and ethical dilemmas. Real-world applications demonstrate the complexity of balancing protection and truth, benefits and risks. Stronger cooperation is needed to foster responsible governance of AI. The UN can serve as an ideal platform for these discussions, bringing together actors ranging from states to civil society, academia, and the private sector.
