Last week’s AI Safety Summit, held at Bletchley Park in the U.K., was all about acknowledgement, action, adaptation, and, most importantly, trust, says François-Philippe Champagne, Canadian federal minister of innovation, science and industry.
Champagne, who made the remarks following the completion of the summit, which involved 28 nations and led to the signing of the Bletchley Declaration, said, “it is critical and urgent that we all come together to build public trust around this transformational technology.
“Canada was the first country to adopt a national AI strategy, we recently launched a voluntary AI code of conduct for advanced AI systems, and we are moving ahead with one of the first AI laws in the world. We look forward to continuing our work with like-minded countries to move confidently from fear to opportunity.”
The industry’s view of the declaration is also positive. Joseph Thacker, researcher with Software-as-a-Service (SaaS) security platform developer AppOmni, said, “experts are split on the actual concerns around AI destroying humanity, but it’s clear that AI is an effective tool that can (and will) be used by forces for both good and bad.
“The Bletchley Declaration acknowledges the potential of AI in improving human wellbeing, fostering innovation, and protecting human rights. At the same time, it recognizes the risks, including those from frontier AI, and emphasizes the need for risk-based policies and safety testing.”
Thacker said the emphasis on international cooperation is particularly noteworthy. “This is a worldwide problem. AI risks are international in nature, and addressing them effectively will require coordinated action across borders. The commitment to a global dialogue and supporting scientific research on frontier AI safety is a big deal.”
Imran Ahmad, head of technology and co-head of information governance, privacy and cybersecurity with Canadian law firm Norton Rose Fulbright, described the Declaration as a “step in the right direction.
“It allows countries to have a framework for their respective jurisdictions, but, more importantly, allows businesses to have more certainty around what standards they should be trying to meet from a best practices standpoint.”
In a research document released in July, OpenAI wrote that advanced AI models hold the “promise of tremendous benefits for humanity, but society needs to proactively manage the accompanying risks. In this paper, we focus on what we term ‘frontier AI’ models: highly capable foundation models that could possess dangerous capabilities sufficient to pose severe risks to public safety.
“Frontier AI models pose a distinct regulatory challenge: dangerous capabilities can arise unexpectedly; it is difficult to robustly prevent a deployed model from being misused; and it is difficult to stop a model’s capabilities from proliferating.”
In announcing the signing of the Bletchley Declaration, U.K. prime minister Rishi Sunak described it as a “landmark achievement that sees the world’s greatest AI powers agree on the urgency behind understanding the risks of AI – helping ensure the long-term future of our children and grandchildren.”
This isn’t the end of the initiatives. South Korea will hold what is being described in a release issued by the U.K. government as a “mini virtual summit on AI” within the next six months, and France will then host the next in-person summit a year from now.