November 27, 2023

Eighteen countries, including Canada, the U.S. and the U.K., today agreed on recommended guidelines for developers in their nations on the secure design, development, deployment, and operation of artificial intelligence (AI) systems.
It’s the latest in a series of voluntary guardrails that nations are urging their public and private sectors to follow for overseeing AI in the absence of legislation. Earlier this year, Ottawa and Washington announced similar guidelines for each of their countries.
The guidelines come as businesses release and adopt AI systems that can affect people’s lives, in the absence of national legislation.
The latest document, Guidelines for Secure AI System Development, is aimed primarily at providers of AI systems, whether they are using models hosted by their own organization or external application programming interfaces (APIs).
“We urge all stakeholders (including data scientists, developers, managers, decision-makers, and risk owners) to read these guidelines to help them make informed decisions about the design, development, deployment and operation of their AI systems,” says the document’s introduction.
The guidelines follow a ‘secure by default’ approach, and are aligned closely with practices defined in the U.K. National Cyber Security Centre’s secure development and deployment guidance, the U.S. National Institute of Standards and Technology’s Secure Software Development Framework, and secure by design principles published by the U.S. Cybersecurity and Infrastructure Security Agency and other international cyber agencies.
They prioritize:
— taking ownership of security outcomes for customers;
— embracing radical transparency and accountability;
— and building organizational structure and leadership so secure by design is a top business priority.
Briefly:
— for the secure design of AI projects, the guidelines say IT and corporate leaders should understand risks and threat modelling, as well as the specific topics and trade-offs to consider in system and model design;
— for secure development, it is recommended that organizations understand AI in the context of supply chain security, documentation, and asset and technical debt management;
— for secure deployment, there are recommendations covering the protection of infrastructure and models from compromise, threat, or loss, developing incident management processes, and responsible release;
— for secure operation and maintenance of AI systems, there are recommendations for actions such as logging and monitoring (illustrated in the sketch after this list), update management, and information sharing.
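The logging and monitoring recommendation for secure operation lends itself to a concrete illustration. The sketch below is not taken from the guidelines; it is a minimal, hypothetical Python example (the `query_model` function stands in for whatever hosted model or external API a provider actually uses) of how model calls might be wrapped with structured audit logging so that usage can be reviewed and incidents investigated.

```python
import json
import logging
import time
import uuid

# Minimal sketch: structured audit logging around a model call.
# query_model is a hypothetical placeholder for a real hosted model
# or external API; it is not part of any published guideline.

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai.audit")


def query_model(prompt: str) -> str:
    # Placeholder for an actual inference call.
    return f"[model response to: {prompt[:40]}]"


def monitored_query(prompt: str, user_id: str) -> str:
    request_id = str(uuid.uuid4())
    start = time.monotonic()
    response = query_model(prompt)
    elapsed_ms = (time.monotonic() - start) * 1000

    # Emit a structured record so operators can monitor usage,
    # spot anomalies, and support incident management later.
    audit_log.info(json.dumps({
        "request_id": request_id,
        "user_id": user_id,
        "prompt_chars": len(prompt),
        "response_chars": len(response),
        "latency_ms": round(elapsed_ms, 1),
    }))
    return response


if __name__ == "__main__":
    print(monitored_query("Summarize our incident response policy.", user_id="analyst-42"))
```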
Other countries endorsing these guidelines are Australia, Chile, Czechia, Estonia, France, Germany, Israel, Italy, Japan, New Zealand, Nigeria, Norway, Poland, South Korea and Singapore.
Meanwhile, in Canada, the House of Commons Industry Committee will resume hearings Tuesday on Bill C-27, which includes not only an overhaul of the existing federal privacy legislation, but also a new AI bill. So far, most of the witnesses have focused on the proposed Consumer Privacy Protection Act (CPPA). But several witnesses say the proposed Artificial Intelligence and Data Act (AIDA) deals with so many complex issues it should be split from C-27. Others argue the bill is good enough for the time being.
The government still hasn’t produced the full wording of amendments it’s willing to make to AIDA and CPPA to make the bills clearer.
AIDA will regulate what the government calls “high-impact systems,” such as AI systems that make decisions on loan applications or on an individual’s employment. The government says AIDA will make it clear that those developing a machine learning model intended for high-impact use have to ensure that appropriate data protection measures are taken before it goes on the market.
Also, the bill will clarify that developers of general-purpose AI systems like ChatGPT would have to establish measures to assess and mitigate risks of biased output before making the system live. Managers of general-purpose systems would have to monitor for any use of the system that could result in a risk of harm or biased output.
Meanwhile, the European Union is in the final stages of settling the wording of its AI Act, which would be the first such law in the world. According to a news report, the text would ideally be finalized by February 2024, but there are still disagreements over how foundation models like ChatGPT should be regulated.

The post Canada, U.S. sign international guidelines for safe AI development first appeared on IT World Canada.
