AI Risk Management is the process of identifying, assessing, and managing the novel risks associated with AI systems. Risk management can address both the technical and governance aspects of a system in order to reduce financial risk, reputational risk, and legal liability by implementing guardrails to prevent harms from occurring and establishing robust mechanisms for responding when unforeseen risks do occur. Such mechanisms include clear lines of accountability and liability throughout the supply chain.
AI Risk Management can also help to ensure compliance with ethical and legal standards, as well as internal and external policies. Through this compliance, AI Risk Management can help to balance innovation with safety by ensuring that appropriate safeguards are in place to innovate responsibly, with social and technical best practices in mind.
Broadly, the technical risks of AI systems, spanning areas such as robustness, bias, privacy, explainability, and efficacy, can be managed through a five-step process:
Triage: An inventory is created of the AI systems in use, and each system is evaluated for inherent risk based on its type and the context in which it is used. Systems that inform critical decisions affecting someone's life chances are considered high risk, while systems with few implications for someone's life chances are considered low risk. These risks can be divided into technical risks, such as those outlined above, and governance risks, such as accountability, management, compliance, and associated processes.
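To make this step concrete, here is a minimal sketch of what an inventory and inherent-risk triage could look like in code; the context categories, field names, and two-level scale are illustrative assumptions, not drawn from any particular framework:

```python
from dataclasses import dataclass

# Hypothetical high-impact contexts for illustration; real frameworks define their own.
HIGH_IMPACT_CONTEXTS = {"hiring", "credit scoring", "insurance pricing", "medical triage"}

@dataclass
class AISystem:
    name: str
    use_context: str            # e.g. "hiring", "customer support"
    affects_life_chances: bool  # does it inform decisions about people's life chances?

def inherent_risk(system: AISystem) -> str:
    """Assign an inherent risk level from the system type and context of use."""
    if system.affects_life_chances or system.use_context in HIGH_IMPACT_CONTEXTS:
        return "high"
    return "low"

# Build the inventory and triage each system.
inventory = [
    AISystem("resume-screener", "hiring", affects_life_chances=True),
    AISystem("faq-chatbot", "customer support", affects_life_chances=False),
]
for system in inventory:
    print(f"{system.name}: inherent risk = {inherent_risk(system)}")
```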
Assessment: In-depth qualitative and quantitative assessments are carried out to gather additional information about any strategies already in place to measure or mitigate risks, in order to determine the residual risk of the system across the different risk verticals. This can result in the system's risk level being updated from the inherent risk level, ideally remaining the same or being lowered.
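As a rough illustration, residual risk can be modelled as the inherent level adjusted downward for each control that is verified to be effective; the three-tier scale and one-level-per-control rule below are hypothetical simplifications, not a prescribed methodology:

```python
# Hypothetical three-tier scale extending the two-level triage sketch above.
LEVELS = ["low", "medium", "high"]

def residual_risk(inherent: str, verified_controls: int) -> str:
    """Lower the inherent risk by one level per verified, effective control,
    never dropping below 'low'."""
    index = LEVELS.index(inherent)
    return LEVELS[max(0, index - verified_controls)]

print(residual_risk("high", 0))  # no controls verified -> "high"
print(residual_risk("high", 1))  # one effective control -> "medium"
print(residual_risk("high", 5))  # well controlled -> "low"
```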
Mitigation: Once the residual risk of a system has been evaluated and outstanding risks identified, steps can be taken to mitigate them. Recommendations provided in this step are bespoke and targeted, based on the specifications of the system and the specific risks associated with it.
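A simple way to picture this step is a lookup from identified risk verticals to candidate mitigations, as in the sketch below; the risk names and mitigation lists are illustrative placeholders, since real recommendations are tailored to the system under review:

```python
# Hypothetical mapping from risk verticals to candidate mitigations; in practice
# recommendations are bespoke to the system under review.
MITIGATIONS = {
    "bias": ["rebalance training data", "apply fairness constraints in post-processing"],
    "robustness": ["adversarial testing", "input validation"],
    "privacy": ["data minimization", "differential privacy for released statistics"],
    "explainability": ["feature-attribution reports", "model documentation"],
}

def recommend(outstanding_risks: list[str]) -> dict[str, list[str]]:
    """Return candidate mitigations for each outstanding risk."""
    return {r: MITIGATIONS.get(r, ["escalate for expert review"]) for r in outstanding_risks}

print(recommend(["bias", "privacy"]))
```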
Assurance: Once mitigations have been effectively implemented, the system is assured and can be certified when the process is conducted in accordance with specific frameworks.
Monitoring: Once assured, a system should be continuously monitored and periodically reevaluated, particularly after major updates, to ensure that changes made to the system do not introduce additional risks. Monitoring can also help to verify that mitigations remain effective.
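One minimal way to operationalize such monitoring is to compare a tracked metric against the level recorded at assurance time and flag significant drops; the choice of metric, review windows, and tolerance below are assumptions for illustration:

```python
import statistics

def needs_reevaluation(baseline: list[float], recent: list[float], tolerance: float = 0.05) -> bool:
    """Flag when a monitored metric (e.g. accuracy per review window)
    drops more than `tolerance` below the level recorded at assurance time."""
    return statistics.mean(recent) < statistics.mean(baseline) - tolerance

# Accuracy at assurance time vs. after a major model update.
assured = [0.91, 0.90, 0.92]
post_update = [0.84, 0.86, 0.85]
if needs_reevaluation(assured, post_update):
    print("Performance drifted below the assured baseline: trigger reevaluation")
```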
AI Risk Management is important for preventing harms resulting from the use of AI without the appropriate guardrails, and can help to increase trust in systems. It can also support the creation of an inventory of all of the AI systems in use within an organization, creating greater visibility throughout the organization and helping to map out needs or redundancies.
There have already been a number of high-profile cases of AI and automated systems causing actual or potential harm, or violating existing laws, where appropriate risk management practices were not implemented and compliance was not prioritized. AI risk management can help avoid these mistakes, upholding the integrity of AI systems and building trust.
Under the EU AI Act, systems that are considered high-risk (those that can have a significant impact on the lives of users) will be required to comply with stringent obligations, one of which is implementing a risk management system. While the standards that will define exactly what the risk management system must look like are still being developed by CEN-CENELEC, the current text states that the risk management system must be continuous and iterative, in place throughout the entire lifecycle of the system.
Moreover, laws requiring risk management of AI systems are emerging in the US, including Colorado's SB 21-169, which prohibits unfair discrimination resulting from the use of external consumer information sources, algorithms, and predictive models in insurance practices. Specifically, the law restricts insurers' use of external consumer data and information sources, such as credit scores, social media habits, locations, purchasing habits, home ownership, educational attainment, occupation, licensures, civil judgments, and court records, as well as algorithms and predictive models that use this data. The Colorado Commissioner of Insurance is required to consult with stakeholders to develop rules for specific insurance types and practices, and the rulemaking process is currently underway for life insurance and private passenger auto insurance.
Regulation 10-1-1 regarding life insurance went into effect on 14 November 2023. Under it, life insurers using external consumer information sources, algorithms, and predictive models must establish a governance and risk management framework to support policies, procedures, and systems that determine whether the use of such data could result in unfair discrimination. As part of this framework, the insurer must also create and document a process for selecting and overseeing all external resources and third-party vendors.
The National Institute of Standards and Technology's AI Risk Management Framework 1.0 (AI RMF 1.0) is a voluntary framework that aims to reduce and manage the harms AI systems can cause to people, organizations, and the wider ecosystem. Essentially, the AI RMF provides guidance for organizations developing and deploying trustworthy AI systems across use cases and sectors.
Within this framework, NIST identifies seven key characteristics of trustworthy AI systems: safe; secure and resilient; explainable and interpretable; privacy-enhanced; fair, with harmful bias managed; accountable and transparent; and valid and reliable.
Guidance for implementing these seven characteristics is provided in the AI RMF Core, which sets out a series of actions organizations can adopt and adapt to effectively manage the risks of their AI systems. Specifically, the Core is built around four key functions, illustrated in the sketch after this list:
Govern: Establishing policies, procedures, accountability structures, and engagement protocols, as well as prioritizing workforce Diversity, Equity, and Inclusion considerations
Map: Categorizing and mapping AI systems contextually, comparing a system's capabilities, risks, benefits, and relative impact with appropriate benchmarks
Measure: Applying context-specific metrics and methods to evaluate a system's trustworthy characteristics over time, as well as critiquing analytic methods
Manage: Using such methods and metrics to triage and respond to AI risks, manage third-party risks and benefits, and document and monitor risk mitigation plans
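To illustrate how these four functions might be tracked internally, here is a minimal checklist sketch keyed to the Core's function names; the action items and completion state are illustrative paraphrases of the descriptions above, not text taken from the framework itself:

```python
# The four function names come from the AI RMF Core; the items under each are
# illustrative paraphrases of the descriptions above, not framework text.
rmf_core = {
    "Govern": ["policies and accountability structures", "workforce DEI considerations"],
    "Map": ["contextual categorization of systems", "benchmarking capabilities and impacts"],
    "Measure": ["context-specific trustworthiness metrics", "critique of analytic methods"],
    "Manage": ["triage and respond to risks", "document and monitor mitigation plans"],
}
completed = {"Govern": {"policies and accountability structures"}}

for function, items in rmf_core.items():
    for item in items:
        status = "done" if item in completed.get(function, set()) else "todo"
        print(f"[{status}] {function}: {item}")
```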
It is never too early or too late to implement AI Risk Management – it can be applied throughout the lifecycle of AI systems. During the design and development stage, AI risk management can help to ensure that safeguards are built into a system by anticipating and mitigating possible risks and that there are failsafes in the event that something does go wrong.
Pre-deployment, AI risk management can ensure that AI systems have the appropriate documentation and means for transparency and accountability to enable confident use of the system and a clear route to recourse if any issues arise. Once deployed, AI risk management is important in the ongoing monitoring of systems to ensure that they perform and are being applied as intended, and so that any negative consequences are spotted and mitigated early, before they potentially result in a serious impact or become a liability.