Understand the layers of trusted AI
As organizations become more familiar with the use (or misuse) of AI, they are realizing how many things can go wrong with AI projects. There are challenges with deepfakes and the misuse of AI for deception, but also concerns such as biased datasets, the potential misuse of copyrighted data, and the potential for AI systems to cause real harm. In an effort to make AI systems more ethical and trustworthy, organizations are beginning to recognize the broader scope of considerations grouped under the term "trustworthy AI."
Fundamentals of trustworthy AI
As detailed in a recent Cognilytica AI Today podcast on this topic, anyone considering developing or leveraging an AI system must maintain trust, provide visibility and transparency, ensure oversight and accountability, and increase explainability and understanding of how AI systems work.
This is because organizations are increasingly using AI to enable a wide range of increasingly mission-critical applications. These AI systems can have a significant impact on people's daily lives and livelihoods. Therefore, trustworthy AI is needed to keep an organization's customers, employees, users, partners, stakeholders, shareholders, and itself safe.
At the same time, people have significant fears and concerns about AI, and these need to be addressed to build and maintain trust. Organizations do not want to spend time, money, and resources only to end up with AI systems that make people uncomfortable or distrustful. That path leads to very expensive failures.
Lack of visibility into AI systems also causes anxiety. People are often asked to blindly trust an AI system without knowing what is going on inside it or how it was created. This also limits meaningful disclosure and consent when AI systems are used.
Additionally, the power of AI presents a real opportunity for bad actors to do bad things. Machines can malfunction in ways that cause real harm, but humans can also deliberately use AI to cause harm that is just as serious or worse. Imposing limits, controls, safeguards, and guardrails helps address these issues. It also enables monitoring, management, control, testing, and, of course, keeping humans in the loop.
All of these different aspects of Trustworthy AI need to be considered in a holistic way to avoid piecemeal approaches to protecting people and systems.
What are the layers of Trustworthy AI?
Rather than considering each aspect of trustworthy AI in isolation, it can be treated as different aspects or layers of trustworthy AI that can be addressed in a comprehensive manner. In 2020, research firm Cognilytica evaluated and analyzed more than 60 different frameworks for ethical, responsible, and trustworthy AI across a wide range of organizations, countries, and businesses. Many of the concepts and terminology in these frameworks were confusing, contradictory, and often at varying levels of detail.
In response, Cognilytica put together a comprehensive approach to trustworthy AI that deals with its different aspects in distinct layers, leaving no aspect unaddressed and treating them all consistently.
The five main layers of the comprehensive trustworthy AI framework address:
- Ethical aspects of AI – Guidelines for AI systems to participate in society in a positive way. This includes non-harm and human values, providing positive benefits to humans, addressing issues of bias, diversity, and inclusion, and ensuring human control, freedom, and agency.
- Responsible use of AI – Addresses the potential for misuse or abuse of AI, including aspects such as AI safety and privacy, trust, human accountability, and reducing the risk that AI systems are used in harmful, inappropriate, or malicious ways.
- Systemic AI transparency – Helps instill trust in AI systems by providing visibility into overall system behavior, data and AI configuration, disclosure and user consent, visibility into (and potential mitigation of) bias, and the use of open systems.
- AI governance – Focuses on implementing processes, controls, and safeguards for AI systems, auditing and monitoring AI systems, and third-party regulation and system certification.
- Explainability of algorithms – Reduces the "black box" nature of AI systems by providing means to understand how machines reach their conclusions, including guidelines for explainability, interpretability, and/or understanding of algorithms.
While there are many competing approaches to trustworthy, ethical, and responsible AI, a comprehensive approach that provides guidance to the widest possible audience will best achieve the objective of trustworthy AI.
What are the characteristics of Trustworthy AI that should be put into practice?
Creating trustworthy AI is more than a few individuals putting ideas down on paper. It must be translated into real-world implementation across the organization. Regardless of your approach to trustworthy AI, the key is to make it practical and implementable. This is the only way to maintain the trust of users, customers, employees, and all stakeholders in the AI world we now live in.
The five main layers outlined above serve as the characteristics and guidelines to follow when implementing a comprehensive trustworthy AI framework.
(Disclosure: I am a co-host of the AI Today podcast and a managing partner at Cognilytica)