What should an AI ethics governance framework look like?


As the race to develop generative AI intensifies, so does the ethical debate surrounding the technology. And the stakes keep getting higher.

According to Gartner, “Organizations are responsible for ensuring that AI projects they develop, deploy or use do not have negative ethical consequences.” Meanwhile, 79% of executives say AI ethics is important to their enterprise-wide AI approach, yet fewer than 25% have operationalized ethics governance principles.

AI is also high on the list of United States government concerns. In late February 2024, Speaker Mike Johnson and Democratic Leader Hakeem Jeffries announced a bipartisan Task Force on AI to explore how Congress can ensure that America continues to lead global AI innovation. The Task Force will also consider the guardrails required to safeguard the nation against current and emerging threats and to ensure the development of safe and trustworthy technology.

Clearly, good governance is essential to address AI-associated risks. But what does sound AI governance look like? A new IBM-featured case study by Gartner provides some answers. The study details how to establish a governance framework to manage AI ethics concerns. Let’s take a look.

Why AI governance matters

As businesses increasingly integrate AI into their everyday operations, the ethical use of the technology has become a hot topic. The problem is that organizations often rely on broad corporate principles, combined with legal or independent review boards, to assess the ethical risks of individual AI use cases.

However, according to the Gartner case study, AI ethics principles can be too broad or abstract, leaving project leaders struggling to decide whether individual AI use cases are ethical. Meanwhile, legal and review board teams lack visibility into how AI is actually being used across the business. All of this opens the door to unethical use of AI (intentional or not) and the business and compliance risks that follow.

Given the potential impact, the problem must first be addressed at the governance level, followed by organizational implementation with the appropriate checks and balances.

Four core roles of an AI governance framework

According to the case study, business and privacy leaders at IBM developed a governance framework to address ethical concerns surrounding AI projects. The framework is built around four core roles (see the sketch after this list):

  1. Policy advisory committee: Senior leaders are responsible for determining global regulatory and public policy objectives, as well as privacy, data and technology ethics risks and strategies.

  2. AI ethics board: Co-chaired by the company’s global AI ethics leader from IBM Research and the chief privacy and trust officer, the Board comprises a cross-functional and centralized team that defines, maintains and advises about IBM’s AI ethics policies, practices and communications.

  3. AI ethics focal points: Each business unit has focal points (business unit representatives) who act as the first point of contact to proactively identify and assess technology ethics concerns, mitigate risks for individual use cases and forward projects to the AI Ethics Board for review. A large part of AI governance hinges upon these individuals, as we’ll see later.

  4. Advocacy network: A grassroots network of employees who promote a culture of ethical, responsible and trustworthy AI technology. These advocates contribute to open workstreams and help scale AI ethics initiatives throughout the organization.
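
To make the division of labor concrete, here is a minimal Python sketch of how an organization might encode these four roles in an internal review-tracking tool. This is an illustration only: the class and field names are hypothetical, not part of IBM’s framework or the Gartner case study.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class GovernanceRole(Enum):
    """The four core roles of the framework described above."""
    POLICY_ADVISORY_COMMITTEE = auto()  # sets regulatory, policy and risk strategy
    AI_ETHICS_BOARD = auto()            # defines and maintains AI ethics policy
    FOCAL_POINT = auto()                # first point of contact in each business unit
    ADVOCACY_NETWORK = auto()           # grassroots culture-building at scale


@dataclass
class EthicsReviewRequest:
    """One AI use case moving through the governance pipeline (hypothetical)."""
    use_case_id: str
    business_unit: str
    description: str
    # Every request starts with the business unit's focal point.
    current_owner: GovernanceRole = GovernanceRole.FOCAL_POINT
    history: list = field(default_factory=list)

    def escalate(self) -> None:
        """Hand a case the focal point cannot clear to the AI Ethics Board."""
        self.history.append(self.current_owner)
        self.current_owner = GovernanceRole.AI_ETHICS_BOARD
```

Modeling requests this way keeps an audit trail of ownership, which mirrors the framework’s emphasis on accountability at each level.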

Risk-based assessment criteria

If an AI ethics issue is identified, the focal point assigned to the use case’s business unit initiates an assessment. Because focal points execute this process on the front lines, they can triage low-risk cases directly. For higher-risk cases, a formal risk assessment is completed and escalated to the AI Ethics Board for review.
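
To illustrate this two-tier flow, here is a minimal sketch of the triage logic in Python. The risk levels, function name and example use case are all hypothetical; the case study describes the process, not any implementation.

```python
from enum import Enum


class RiskLevel(Enum):
    LOW = "low"
    HIGH = "high"


def triage(use_case_id: str, risk: RiskLevel) -> str:
    """Front-line triage as the case study describes it: the focal point
    clears low-risk cases directly, while higher-risk cases get a formal
    risk assessment and are escalated to the AI Ethics Board."""
    if risk is RiskLevel.LOW:
        return f"{use_case_id}: cleared by business unit focal point"
    return f"{use_case_id}: formal risk assessment escalated to AI Ethics Board"


# Hypothetical example use case:
print(triage("hr-resume-screening", RiskLevel.HIGH))
```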

Each use case is evaluated against guidelines that include the following (a checklist sketch follows the list):

  • Associated properties and intended use: Investigates the nature, intended use and risk level of a particular use case. Could the use case cause harm? Who is the end user? Are any individual rights being violated?

  • Regulatory compliance: Determines whether data will be handled safely and in accordance with applicable privacy laws and industry regulations.

  • Previously reviewed use cases: Provides insights and next steps from use cases previously reviewed by the AI Ethics Board. Includes a list of AI use cases that require the board’s approval.

  • Alignment with AI ethics principles: Determines whether use cases meet foundational requirements, such as alignment with principles of fairness, transparency, explainability, robustness and privacy.
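
As a sketch of how those guidelines might be operationalized, the following hypothetical Python checklist mirrors the four categories. The question wording paraphrases this article, not an actual IBM review artifact.

```python
# Hypothetical checklist mirroring the four guideline categories above.
ASSESSMENT_CRITERIA = {
    "properties_and_intended_use": [
        "Could the use case cause harm?",
        "Who is the end user?",
        "Are any individual rights being violated?",
    ],
    "regulatory_compliance": [
        "Is data handled per applicable privacy laws and industry regulations?",
    ],
    "previously_reviewed_use_cases": [
        "Does a prior AI Ethics Board review cover this use case?",
        "Is this use case on the list requiring board approval?",
    ],
    "alignment_with_ai_ethics_principles": [
        "Does it meet fairness, transparency, explainability, robustness and privacy requirements?",
    ],
}


def open_questions(answers: dict) -> list:
    """Return every checklist question not yet resolved satisfactorily;
    anything left open is grounds for escalating the use case."""
    return [
        question
        for questions in ASSESSMENT_CRITERIA.values()
        for question in questions
        if not answers.get(question, False)
    ]
```

A focal point could run `open_questions` after each review round: an empty result suggests the use case can be cleared at the front line, while open items point toward board escalation.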

Benefits of an AI governance framework

According to the Gartner report, the implementation of an AI governance framework benefited IBM by:

  • Scaling AI ethics: Focal points drive compliance and initiate reviews in their respective business units, which enables an AI ethics review at scale.

  • Increasing strategic alignment of AI ethics vision: Focal points connect with technical, thought and business leaders in the AI ethics space throughout the business and across the globe.

  • Expediting completion of low-risk projects and proposals: By triaging low-risk services or projects, focal points make project reviews faster.

  • Enhancing board readiness and preparedness: By empowering focal points to guide AI ethics early in the process, the AI Ethics Board can review any remaining use cases more efficiently.

With great power comes great responsibility

When ChatGPT debuted in November 2022, the entire world was abuzz with wild expectations. Now, AI trends point toward more realistic expectations for the technology. Standalone tools like ChatGPT may capture the popular imagination, but effective integration into established services will engender more profound change across industries.

Undoubtedly, AI opens the door to powerful new tools and techniques to get work done. However, the associated risks are real as well. Elevated multimodal AI capabilities and lowered barriers to entry also invite abuse: deepfakes, privacy issues, perpetuation of bias and even evasion of CAPTCHA safeguards may become increasingly easy for threat groups.

With bad actors already using AI, the legitimate business world must take preventive measures to keep employees, customers and communities safe.

ChatGPT says, “Negative consequences might encompass biases perpetuated by AI algorithms, breaches of privacy, exacerbation of societal inequalities or unintended harm to individuals or communities. Additionally, there could be implications for trust, reputation damage or legal ramifications stemming from unethical AI practices.”

To protect against these types of risks, AI ethics governance is essential.
