SoatDev IT Consulting
February 23, 2024

Artificial intelligence has invaded industries and companies of all sizes, but the backstory of what makes these tools powerful and erratic alike remains somewhat obscure.
Understanding how and why AI systems arrive at their outputs, Forrester explained in a new report, is a critical transparency mechanism called explainable AI. That transparency is key for enterprises to minimize the trust gap in AI systems across all stakeholders.
Companies at a higher level of AI maturity, where they begin to leverage opaque methods such as neural networks for additional predictive power, are the most concerned with explainability challenges, the report explained.
These neural networks are the only way to analyze text, images, and video at scale, so industries with use cases involving unstructured data will be more inclined to invest in explainability. At the same time, these companies will have even more regulatory exposure.
However, with the explosion of generative AI-enabled natural language interactions, eventually all companies will need to invest in explainability, the report noted.
Regulatory compliance is one factor, but explainability can also help companies unlock the business value of their AI algorithms. In a use case like credit determination, for example, explanations can inform future credit risk models. Customer insight is another area enterprises are eyeing to drive business value.
Furthermore, explainability can drive trust among employees who use AI systems to perform daily functions. AI adoption suffers significantly when employees do not at least have a minimum understanding of how the system produces results. Forrester’s 2023 Data and Analytics survey, in fact, showed that 25 per cent of data and analytics decision makers say that lack of trust in AI systems is a major concern in using AI.
To achieve these outcomes with explainability, researchers have developed interpretability techniques like SHAP and LIME, which are open source and widely used by data scientists. Many larger machine learning platform vendors also offer explainable AI capabilities on top of their existing model development functionality, and serve other responsible AI needs such as model interpretability, bias detection, and model lineage.
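As a rough illustration of the idea behind a LIME-style local explanation (this is a simplified sketch, not the LIME library's actual API; the `black_box` model and all names here are hypothetical), one can perturb an input, weight the perturbations by proximity, and fit a weighted linear surrogate whose coefficients act as local feature importances:

```python
import numpy as np

# Hypothetical black-box "model": a nonlinear scoring function standing in
# for an opaque classifier whose internals we cannot inspect.
def black_box(X):
    return 1.0 / (1.0 + np.exp(-(3.0 * X[:, 0] - 0.5 * X[:, 1] ** 2)))

def explain_locally(model, x, n_samples=500, scale=0.1, seed=0):
    """LIME-style local explanation: sample perturbations around x,
    weight them by proximity, and fit a linear surrogate model."""
    rng = np.random.default_rng(seed)
    X = x + rng.normal(0.0, scale, size=(n_samples, x.size))
    y = model(X)
    # Proximity kernel: perturbations closer to x count more.
    w = np.exp(-np.sum((X - x) ** 2, axis=1) / (2 * scale ** 2))
    # Weighted least squares on [intercept, features].
    A = np.hstack([np.ones((n_samples, 1)), X])
    W = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A * W, y * W[:, 0], rcond=None)
    return coef[1:]  # per-feature local importance

importances = explain_locally(black_box, np.array([0.5, 1.0]))
print(importances)
```

Near the point (0.5, 1.0) the surrogate's coefficients approximate the model's local gradient, so the first feature should come out with a large positive weight and the second with a smaller negative one; production tools like SHAP and LIME add careful sampling, kernels, and feature encodings on top of this basic recipe.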
But data scientists are not the only ones who will need explainability. AI governance teams need model intelligence platforms that provide responsible AI assessments to oversee an enterprise’s use of AI.
Business users can also use these model intelligence platforms, or they can use machine learning engines with explainable AI techniques, especially for high-risk or highly regulated use cases such as credit determination and hiring.
Forrester recommends that enterprises seeking explainability do the following:

  • Look at the different AI use cases, classify their risk accordingly, and then define explainability requirements for each tier. Companies, for instance, have been borrowing from the EU's AI Act, which classifies AI systems into four categories: unacceptable risk, high risk, limited risk, and minimal risk. High-risk use cases may require complete transparency, while interpretability may suffice for lower-risk use cases.
  • Demand explainability from AI vendors and beware of the black box, as you may be held accountable for any vulnerabilities or flaws.
  • Ensure that explainability goes beyond individual models and covers how the entire system works (the interoperability of all the pieces), and measure business outcomes and customer satisfaction as well as model performance to ensure the system is delivering as expected.
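The tiering recommendation above can be encoded as a simple lookup from use case to risk tier to requirement. A minimal sketch, assuming illustrative tier names and requirements (these are hypothetical examples, not Forrester's or the EU AI Act's exact wording):

```python
# Illustrative mapping from risk tier to explainability requirement.
REQUIREMENTS = {
    "unacceptable": "do not deploy",
    "high": "full transparency: per-decision explanations and an audit trail",
    "limited": "interpretability: global feature importances suffice",
    "minimal": "no formal explainability requirement",
}

# Hypothetical classification of use cases into tiers.
USE_CASE_TIER = {
    "credit determination": "high",
    "hiring": "high",
    "marketing copy drafts": "minimal",
}

def explainability_requirement(use_case):
    # Default unclassified use cases to a conservative middle tier.
    tier = USE_CASE_TIER.get(use_case, "limited")
    return tier, REQUIREMENTS[tier]

print(explainability_requirement("credit determination"))
```

Defaulting unknown use cases to a middle tier rather than "minimal" is one way to stay conservative until a use case has been explicitly risk-classified.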

The full Forrester report is available for purchase here. The post "Making AI explainable to bridge trust gaps: Forrester weighs in" first appeared on IT World Canada.
