Number 2 below is particularly noteworthy!
Using commercially available AI: the key takeaways:
1. Privacy obligations apply to personal information entered into AI, and to output generated by AI if that output contains personal information. APP Entities must embed privacy into their selection and use of any AI system that interfaces with personal information. That includes AI systems trained or tested on personal information, as well as those that will generate outputs containing personal information.
2. Even incorrect AI-generated information about a reasonably identifiable individual constitutes personal information and must be managed accordingly. This includes hallucinations.
3. Privacy Policies and Collection Notices should clearly outline when and how AI will access and use an individual's personal information, to enable informed consent.
4. Use of AI to generate or infer personal information must comply with Australian Privacy Principle (APP) 3 in relation to collection of personal information.
5. In accordance with APP 6, personal information should only be used or disclosed to AI for the primary purpose for which it was collected (which should be narrowly framed); otherwise with consent; or where the APP Entity can establish that the secondary use would be reasonably expected by the individual and is related (or, for sensitive information, directly related) to the primary purpose. To establish that a secondary use was reasonably expected, best practice is to outline the proposed use in the APP Entity's Collection Notice and Privacy Policy.
6. The OAIC has explicitly confirmed it is best practice not to enter personal information into publicly available generative AI tools, such as chatbots.
Thanks to Sonja Read and MinterEllison for the summary of OAIC guidance on Privacy Impact Assessments before deploying AI 👇
This week the Office of the Australian Information Commissioner (OAIC) published two guides on how privacy law applies to AI. The OAIC has confirmed it is best practice not to enter personal information into publicly available generative AI tools, and that Privacy Impact Assessments should be performed before a new #AI system is introduced.
We've summarised the key takeaways from the first guide to help Australian Privacy Principle Entities (APP Entities) deploy AI systems in a way that complies with their privacy obligations.
Find out more: https://lnkd.in/gnTZZuPD
Authors: Chelsea Gordon, Sam Burrett
AI Lead @ MinterEllison | Writing about productivity and artificial intelligence.
Really important updates from the OAIC here. It's clear that privacy should be a key consideration in the use of any publicly available AI tool.