Llama ponders the risks of AI and privacy regulation

Lately, you can’t swing a cat without hitting an article or post about AI (especially those posts where someone asked ChatGPT to write a letter/poem/limerick in the style of x). Snooze fest, right?

Nevertheless, AI is here to stay, and it’s here to use our personal data to learn and evolve. And while AI systems don’t care about ethics or our privacy, their developers should.

WHAT IS THIS AI YOU SPEAK OF?

According to PwC’s Global AI study, AI is a collective term for computer systems that can sense their environment, think, learn, and take action in response to what they are sensing and what their objectives are. Examples include chatbots, digital assistants and machine learning.

WHAT ARE THE PRIVACY RISKS WHEN USING AI?

According to the IAPP’s Privacy and AI Governance report, the top risks for AI and privacy are harmful bias, bad governance and a lack of legal clarity. Within the next couple of years, most organisations’ risk registers will likely include items such as AI and machine-learning bias, increased potential fines for privacy infringements, and the interoperability of algorithmic outputs.

HOW DO WE PROTECT PRIVACY WHEN USING AI?

Well, AI regulation is coming, as the Harvard Business Review said it would back in 2021. Regulators are abuzz with draft regulations and frameworks, working groups and sandboxes that aim to regulate the use of AI. Regulation is necessary to mitigate the risks of using AI. For instance, using AI increases the potential scale of bias, and bias is often embedded in the data used to train AI – data provided by humans. When Microsoft launched Tay, an AI chatbot built to engage with users on Twitter, it had to be shut down in less than 24 hours because of its racist and sexist comments – behaviour learned from human (and likely some bot) Twitter users.
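To make that point concrete, here’s a toy sketch (with entirely made-up data) of how skew in historical decisions becomes skew in training labels – exactly the kind of pattern a model trained on those labels would learn and reproduce at scale:

    from collections import Counter

    # Toy, fabricated historical decisions used as training labels.
    # Any skew baked into these labels is what a model trained on them learns.
    training_labels = [
        ("group_a", "hired"), ("group_a", "hired"), ("group_a", "rejected"),
        ("group_b", "rejected"), ("group_b", "rejected"), ("group_b", "hired"),
    ]

    hire_rate = {}
    for group in ("group_a", "group_b"):
        decisions = [label for g, label in training_labels if g == group]
        hire_rate[group] = Counter(decisions)["hired"] / len(decisions)

    # group_a: ~0.67, group_b: ~0.33 – a gap the model would inherit
    print(hire_rate)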

According to an independent high-level expert group on AI set up by the European Commission, the seven requirements for trustworthy AI are:

  • human agency and oversight;
  • technical robustness and safety;
  • privacy and data governance;
  • transparency;
  • diversity, non-discrimination and fairness;
  • societal and environmental well-being; and
  • accountability.

So privacy is on the list – check. But requirements like security, accountability, transparency and fairness are part of both privacy and AI governance frameworks. Those working in AI governance must collaborate with and learn from their privacy governance counterparts. For instance, an AI impact assessment could be merged with, or at the very least coordinated with, a privacy impact assessment.

WHERE TO START

We would start with the European Commission, which published a super useful self-assessment tool for trustworthy AI (the Assessment List for Trustworthy Artificial Intelligence, or ALTAI). Then, remember to involve your privacy team from the get-go to ensure you follow privacy best practices and regulations when training AI systems, such as the following (sketched in code after the list):

  • doing a privacy impact assessment;
  • notifying data subjects that their data will be used for training AI systems;
  • meeting the requirements for secondary use (further processing) of personal data which often include obtaining consent; and
  • ensuring data quality.
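As a loose illustration only (not legal advice), here’s what gating training data on consent and a basic quality check might look like in practice; the record fields, helper names and rules are hypothetical and would need to match your own systems and legal basis:

    from dataclasses import dataclass

    @dataclass
    class Record:
        """A hypothetical training record tied to a data subject."""
        subject_id: str
        text: str
        consented_to_training: bool  # captured when notifying the data subject
        source: str                  # provenance, useful for quality and bias review

    def eligible_for_training(record: Record) -> bool:
        """Apply hypothetical privacy and quality gates before training."""
        if not record.consented_to_training:  # secondary-use/consent requirement
            return False
        if not record.text.strip():           # crude data-quality check
            return False
        return True

    records = [
        Record("u1", "Great service, fast delivery.", True, "reviews"),
        Record("u2", "", True, "reviews"),                               # fails quality
        Record("u3", "Terrible support experience.", False, "surveys"),  # no consent
    ]

    training_set = [r for r in records if eligible_for_training(r)]
    print(f"{len(training_set)} of {len(records)} records eligible for training")

In a real pipeline the consent flag would come from your consent-management records, and “quality” would mean far more than non-empty text. The point is that these gates belong in the training pipeline itself, not only in a policy document.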

Also check out the NIST Artificial Intelligence Risk Management Framework (AI RMF 1.0) and the European Commission’s proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (the Artificial Intelligence Act).

Drop us a DM if you want to chat.
