Jacinda Ardern: Here's the model for governing AI
Jacinda Ardern is former prime minister of New Zealand and the New Zealand prime minister’s special envoy for the Christchurch Call.

Several months ago, I retired from politics. After five years leading a small but incredible country, I knew it was time for someone else to take the reins. My plan […]

Like so many, I have been following the escalating development of AI and its promise of huge benefits for humanity — ranging from improved productivity to advances in medical science. But I have also been following the risks. The core technology that enables an AI assistant to describe an image to a vision-impaired person is the same technology that might enable disinformation campaigns — and this is the tip of the iceberg.
My interest in learning is matched by my drive to find answers. I want to know the destination for these tools and what they will mean for democracy and humanity. If these are also the answers you’re searching for, I can tell you that you won’t find them — not yet. It’s difficult to know the path we’re on when advocates tell us “everything is fine,” while others warn that AI might be the end of humanity.
On March 15, 2019, a terrorist took the lives of 51 members of New Zealand’s Muslim community in Christchurch. The attacker livestreamed his actions for 17 minutes, and the images found their way onto social media feeds all around the planet. Facebook alone blocked or removed 1.5 million copies of the video in the first 24 hours; in that timeframe, YouTube measured one upload per second.
New Zealand wasn’t the only nation grappling with the connection between violent extremism and technology. We wanted to create a coalition and knew that France had started to work in this space — so I reached out, leader to leader. In my first conversation with President Emmanuel Macron, he agreed there was work to do and said he was keen to join us in crafting a call to action.
While this multi-stakeholder approach isn’t always easy, it has created change. We have bolstered the power of governments and communities to respond to attacks like the one New Zealand experienced. We have created new crisis-response protocols — which enabled companies to stop the 2022 Buffalo attack livestream within two minutes and quickly remove footage from many platforms. Companies and countries have enacted new trust and safety measures to prevent livestreaming of terrorist and other violent extremist content. And we have strengthened the industry-founded Global Internet Forum to Counter Terrorism with dedicated funding, staff and a multi-stakeholder mission.
We’re also taking on some of the more intransigent problems. The Christchurch Call Initiative on Algorithmic Outcomes, a partnership with companies and researchers, was intended to provide better access to the kind of data needed to design online safety measures to prevent radicalization to violence. In practice, it has much wider ramifications, enabling us to reveal more about the ways in which AI and humans interact.
Perhaps the most useful thing the Christchurch Call can add to the AI governance debate is the model itself. It is possible to bring companies, government officials, academics and civil society together not only to build consensus but also to make progress. It’s possible to create tools that address the here and now and also position ourselves to face an unknown future. We need this to deal with AI.
There will be those who are cynical about cooperation or who believe that working together weakens accountability for all. I disagree. For the Christchurch Call, governments have had to accept their roles in addressing the roots of radicalization and extremism. Tech partners have been expected to improve content moderation and terms of service to address violent extremism and terrorist content. Researchers and civil society have had to actively apply data and human rights frameworks to real-world scenarios.
After this experience, I see collaboration on AI as the only option. The technology is evolving too quickly for any single regulatory fix. Solutions need to be dynamic, operable across jurisdictions, and able to quickly anticipate and respond to problems. There’s no time for open letters. And government alone can’t do the job; the responsibility is everyone’s, including those who develop AI in the first place.