Security | 5 min read

Is AI eroding our right to privacy?

By Kordia, 8 May 2023

Unless you’ve been living under a rock, you can’t have missed the explosion of generative AI tools into the workplace.

The uptake has been rapid – ChatGPT alone has amassed over 100 million monthly active users, making it the fastest-growing consumer application in history.

Kiwi workers have been diving into these new tools for everything from writing code and drafting emails to casual experimentation out of curiosity and entertainment.

But with any new technology there’s always a potential trade-off – and already some regulatory bodies and concerned privacy experts are questioning whether this use of AI is harming the privacy of individuals, and to what extent.

In April, Italy’s privacy watchdog banned ChatGPT, citing concerns about a recent data breach and questions around the legal basis for using personal data to train the chatbot. While the measure was temporary, it was shortly followed by an announcement from the body that unites Europe’s national privacy watchdogs that it had set up a task force on ChatGPT – potentially an important first step toward a common European policy on privacy rules for artificial intelligence.

“AI is trained on massive data sets. There are privacy implications around what that data is, where it came from, how accurate it is and whether consent was given for its use,” says Alastair Miller, Principal Consultant at Aura Information Security.

“Then there is the intrinsic bias that AI can have, which impacts how it sees any PII (Personally Identifiable Information) it may be working with. AI tends to hold data and rarely cleanses it, so it breaks the IPP (Information Privacy Principles) requirement of getting rid of any data you don’t need and keeping it accurate.”

“Another issue is that many anonymisation techniques can be broken with wide enough data sets. AI is therefore an excellent tool for deanonymising data.”
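
That risk is easiest to see with a linkage attack: joining an “anonymised” data set with a public one on shared quasi-identifiers such as postcode, birth year and gender can re-attach names to sensitive records. Below is a minimal sketch in Python using pandas – the column names and records are invented purely for illustration.

```python
# Minimal illustration of a linkage (re-identification) attack.
# All records and column names here are invented for the example.
import pandas as pd

# An "anonymised" release: names removed, quasi-identifiers kept.
medical = pd.DataFrame({
    "postcode":   ["6011", "6011", "1010"],
    "birth_year": [1985, 1990, 1985],
    "gender":     ["F", "M", "F"],
    "diagnosis":  ["asthma", "diabetes", "hypertension"],
})

# A public data set (e.g. an electoral roll) sharing those attributes.
public = pd.DataFrame({
    "name":       ["Jane Doe", "John Smith"],
    "postcode":   ["6011", "6011"],
    "birth_year": [1985, 1990],
    "gender":     ["F", "M"],
})

# Joining on the quasi-identifiers re-attaches names to diagnoses.
reidentified = medical.merge(public, on=["postcode", "birth_year", "gender"])
print(reidentified[["name", "diagnosis"]])
```

The wider the data sets involved, the more quasi-identifier combinations become unique – which is why models trained on massive data sets are such effective deanonymisation tools.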

Part of the problem, says Miller, is that protecting privacy doesn’t seem to be high on the radar for those developing some of these AI tools.

“The thrust for developing AI has largely been to solve a problem – so security and data privacy typically haven’t been front and centre during the development stage.”

“This is made worse by the fact that most neural networks are a black box – we don’t actually know how they work, so we can’t be one hundred percent confident that they aren’t doing things they shouldn’t.”

Already we’ve seen evidence of the type of privacy and data breaches that can be linked to this kind of AI. OpenAI confirmed in March that a bug revealed ChatGPT users’ chat history, personal and billing data. A few weeks later, Samsung banned the use of ChatGPT after employees unwittingly exposed sensitive information via the chatbot. Workers had shared source code with ChatGPT to check for errors and used it to summarise meeting notes, prompting the company to instruct its staff not to enter any personal or company-related information into AI services.

Both cases highlight the privacy problems that can arise when confidential and personal information is fed into AI applications. One can imagine the severity of the complications if, rather than company secrets as in the Samsung case, it was medical data or customer financial information being entered and leaked into the AI.

In the case of ChatGPT, information shared with the AI chatbot is stored on OpenAI’s servers and can be used to improve the model unless users opt out.

“If your data has been added into one of these platforms, sadly there’s not a lot you can do – you’re now dependent on the businesses you entrusted your data with to do the right thing and ensure it’s being used responsibly, or remove it. In a way, we’re at the mercy of the developers and businesses using these tools, because our privacy depends on how well they incorporate data privacy and information security principles into the development and usage of these tools.”

Can businesses balance AI and privacy obligations?

There’s no doubt that AI has enormous potential to create efficiencies in business – by automating tasks, improving decision-making, and providing fast and comprehensive insights into an organisation’s operations and data.

Miller says the key for businesses leveraging the benefits of AI is to ensure they are doing so ethically and responsibly, with some strong guidelines in place.

“AI is the way forward, but we need to ensure we address the various ethical issues alongside the benefits. Unfortunately, in technology ethics tends to lag way behind the usage.”

An acceptable use policy for AI is a good place to start – one that defines how employees can use these tools in a secure and responsible manner that protects confidentiality.

“Businesses should definitely have an AI usage policy in place that outlines what is acceptable when using tools like ChatGPT.”

This could include things like what devices and data employees should avoid using with AI tools.
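
As a purely illustrative example of how such a policy could be backed up technically, the sketch below shows a simple pre-submission check that flags obvious personal details before text is sent to an external AI service. The patterns are deliberately basic and invented for this example – real tooling would need far more robust detection.

```python
# Illustrative pre-submission check: flag obvious personal details
# before text is pasted into or sent to an external AI service.
# These patterns are basic examples only, not a complete PII list.
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    # NZ-style phone numbers, e.g. 021 555 1234 or +64 21 555 1234.
    "phone": re.compile(r"\b(?:\+?64|0)[\s-]?\d{1,2}[\s-]?\d{3}[\s-]?\d{3,4}\b"),
}

def flag_pii(text: str) -> list[str]:
    """Return the kinds of PII detected in the text, if any."""
    return [kind for kind, pattern in PII_PATTERNS.items() if pattern.search(text)]

prompt = "Summarise this: contact Jane at jane.doe@example.co.nz or 021 555 1234."
found = flag_pii(prompt)
if found:
    print(f"Blocked – prompt appears to contain: {', '.join(found)}")
else:
    print("No obvious PII detected.")
```
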
Miller adds that businesses building AI capabilities into their technology, or developing their own AI tools, also need some form of policy and guidelines.

“It’s better to be proactive. Lay out an ethical framework for the building and usage of AI tools before you rush to use them, otherwise you run the risk of all sorts of privacy abuses, whether intentional or accidental,” adds Miller.

AI USAGE POLICY CHECKLIST 

DOWNLOAD NOW

AI TERMS EXPLAINED

DOWNLOAD NOW