Moving at the speed of light – which AI risk assessment framework should you use?

If you feel like you’re suffering whiplash from the sheer pace of technological change, especially around the use of AI – you’re not alone.

The different regulatory responses to emerging technologies are piling up, with more guidance and frameworks than you can poke a stick at. The Office of the Australian Information Commissioner’s guidance, released earlier this week, and the Australian Government’s Voluntary AI Safety Standard, released in September, are just the latest to add to the reading pile.

Our privacy advisory team members have decades of experience applying a privacy risk assessment lens to a huge variety of projects, business processes and systems, utilising different technologies and disparate use cases, across multiple industries and jurisdictions. Increasingly, our clients are asking us to assess projects which involve the development, procurement or deployment of AI. Meanwhile, we’re also often asked: which guidance should our organisation follow? Is a standard Privacy Impact Assessment (PIA) framework suitable for AI projects?

We sifted through a number of AI risk assessment frameworks to review their pros and cons – comparing multiple artefacts produced by policy-makers, regulators and other industry bodies, which each seek to guide regulated entities about how they should manage privacy and related risks arising from AI.

We approached this task with two questions in mind:

      • If I were managing an AI project, how easily could I pick up this artefact and use it to assess my project risk? and

      • How useful are the takeaways for someone working in industry?

The result of that project is our comparative analysis report, which offers a review of 14 different frameworks, from State, national and international privacy regulators, national government bodies, and more.

    The challenge with AI frameworks

    As an overall comment, not all frameworks are created equal.

    Some focus on high level principles, some focus just on governance, while others dive deep into practical risk assessment methodologies. Some focus on all types of risks for AI projects, while others focus only on privacy risks. Some offer templates and tools, others are more theoretical.

Some frameworks use terminology we found potentially confusing or misleading for readers. Some don’t clearly distinguish between use cases, which really limits their utility for risk assessment: the legal tests that apply to personal information depend on whether you are using your customers’ personal information to train AI (e.g. for machine learning), or deploying an already developed AI tool into operation. And as you might expect, many frameworks are jurisdiction-specific.

    An overview of key AI frameworks

    If you’re looking for guidance on AI governance, the ASEAN framework is a good place to start. It is not specific to any jurisdiction, and it includes an AI Risk Assessment Template, with some common risks and controls. This is a less intimidating option than others for someone not used to doing a risk assessment.

As far as privacy-focussed risk assessment tools go, the UK’s ICO offers a very practical AI and data protection toolkit, with an Excel spreadsheet you can use. However, it is specific to the UK GDPR.

Closer to home and also from a privacy regulator, OVIC offers a jurisdiction-specific and privacy-focussed resource which will nonetheless align more closely with the needs of those in other Australian jurisdictions. This is a great resource if you are already conducting PIAs, because it summarises what your privacy considerations should be for AI projects.

    The NSW IPC is also developing specific guidance on undertaking PIAs on AI systems and projects for NSW agencies – another resource which should highlight AI-related privacy risks for you to consider when doing a PIA.

The NSW Government’s AI Assurance Framework (AIAF, now at version 2) is often held up as ground-breaking, but we have strong reservations about it. Two of its strengths are its mandatory application to NSW government projects, and its escalation of high-risk projects to review by an independent advisory body. However, its utility in helping project teams identify privacy risks or compliance requirements in practice is limited.

    We have highlighted some concerns about terminology which doesn’t match privacy legislation, guidance about when to conduct a PIA which is at odds with the advice of the NSW privacy regulator, and a narrow-cast approach to defining or testing for privacy risk. For example, the NSW AIAF relies on the existence of ‘sensitive information’ (rather than any personal information) as the threshold for when to consider privacy risk, and it assumes that re-identifiability of ‘de-identified’ data being used for machine learning is the key privacy risk to control for. In our view, NSW public sector agencies should not rely on the AIAF to determine their privacy risk.

The Australian ‘national framework’, released in June this year as a joint initiative of Australian, state and territory governments, was ultimately disappointing because it offered only a set of high-level principles, without any tools to help project teams. Our initial response was ‘great, but now what?’

    Happily, the newer DISR Voluntary AI Safety Standard starts to put flesh on the bones for industry. The Standard is intended to give practical guidance to Australian organisations on how to use AI in a safe and responsible manner. Intended to support human-centred AI deployment, the Standard has a focus on risk management processes, data governance, transparency, accountability and stakeholder engagement.

    The Standard includes 10 guardrails that apply throughout the AI lifecycle and supply chain, which are designed to help organisations identify AI risks and provide practical guidance and requirements to mitigate and manage these risks. However, the Standard doesn’t include any assessment tools, and its casting of privacy risks is narrower than we would like to see.

    By contrast, the New Zealand Government’s Algorithm Impact Assessment toolkit earns a ‘two thumbs up’ review from us.

    A particular strength is its broader application than just AI, covering algorithmic impact assessment. It’s worth remembering that not all automated decision-making or algorithmic systems use AI, and that you don’t need AI to cause great harm – Robodebt being a classic example of a devastating project which sounds like it used AI, but didn’t.

    The NZ toolkit includes a suite of practical resources like a questionnaire, a risk assessment report template and a user guide, and incorporates Māori and cultural considerations in the context of data use and algorithms.

    And then last but not least, the OAIC has just released two sets of very welcome guidance for Australian organisations.  

Of particular benefit is how the OAIC’s guidance helps organisations understand that using personal information to develop or train AI needs to be assessed as its own use case against privacy obligations, separate to using personal information in the actual deployment of an AI system. While many organisations will never see themselves as developing AI, risks still arise when purchasing commercially available AI. To the extent that your customer data could be used to train your vendor’s AI model, that secondary use brings compliance challenges distinct from your primary purpose of deploying the AI model into operation.

    One of the two OAIC guides focuses on generative AI (GenAI).  The guidance highlights the material and complex privacy risks associated with publicly available GenAI tools, and the OAIC recommends organisations do not input any personal information into these tools. 

The OAIC also makes it clear that the generation or inference of personal information by AI systems is a fresh ‘collection’ of personal information, and as such must comply with APP 3. The purpose(s) for which personal information will be used or disclosed by the AI system must also be assessed against APP 6 to ensure each use or disclosure is lawful. The accuracy of AI outputs is also highlighted as a privacy risk that must be managed.

Whether you are developing, training or deploying AI, the OAIC guidance reiterates the need to carry out a PIA. This guidance should be considered when carrying out PIAs, to ensure the particular privacy risks that arise in this context are identified and managed in line with regulatory expectations.

    So, which framework should you use?

    Of course it will depend on which jurisdiction you are in, and whether you are looking for a standalone AI risk assessment methodology which pulls in privacy as well as other project risks, or an AI ‘add-on’ to your existing PIA framework.

For Australian organisations, a great place to start is the new OAIC guidance, which offers an overview of privacy risks and includes useful checklists. Then expand into thinking about AI project management more broadly with the new DISR Voluntary AI Safety Standard. For a comprehensive suite of pragmatic tools, you could adapt the OAIC guidance and DISR Standard into a format similar to that used in the NZ Government’s toolkit.

And two final takeaways. First, a constant we found across all the frameworks and assessment tools is that there’s no getting around the need to conduct a PIA to effectively identify and manage privacy risks created by the use of AI. Second, organisations need to appreciate how AI systems can create novel privacy risks, and exacerbate existing ones.

    Want to learn pragmatic skills to risk assess AI systems? Join our inaugural small group workshop, Assessing Privacy Risks in AI, on 29 and 31 October.

    Need a deeper dive into AI governance? Join the next AIGP certification program on 6 + 7 + 13 + 14 November.
