Cyber Security

Over 50% of Gen Z using Generative AI but risk awareness lags behind

By Kordia, 10 October 2023

New research suggests over half of employed Gen Z New Zealanders are using generative AI, despite limited understanding of the risks associated with its use.

In an independent survey by Perceptive commissioned by Kordia, 53% of Gen Z respondents said they had used generative AI tools like ChatGPT or DALL-E, with around one third saying they’ve used it for work.

Despite this, only 1 in 5 are aware of cyber security or privacy risks associated with AI use.

Alastair Miller, Principal Consultant at Aura Information Security, says that despite AI’s potential, generative AI tools raise numerous issues that many businesses are simply overlooking.

 
“Generative AI can boost productivity, but it can also introduce risks around privacy and information security – especially if your employees are entering sensitive or private information into a public AI tool, like ChatGPT.”

One of the most important things for leaders to understand is the difference between public and private generative AI tools.

Miller cites high-profile cases, like Samsung, where employees inadvertently leaked source code and internal meeting minutes by using the public ChatGPT tool to summarise information.

“Once data is entered into a public AI tool, it becomes part of a pool of training data – and there’s no telling who might be able to access it. That’s why businesses need to ensure generative AI is used appropriately, and any sensitive data or personal information is kept away from these tools.”

“Gen Z are already widely using this technology, so older generations of business leaders need to upskill their knowledge and become ‘AI Literate’, to ensure younger members of the workforce understand what acceptable usage is, and the risks if generative AI is used incorrectly.”

Balancing benefits by mitigating negative outcomes

Miller says other results from the survey indicate New Zealanders of all ages are unaware of some of the other issues pertaining to the use of generative AI tools.

Only 1 in 5 respondents across all age groups said they were aware of any cyber security or privacy risks associated with AI use.

Also concerning, only 1 in 5 were aware that generative AI can produce biased outputs, and only 1 in 3 were aware that it can produce inaccurate information or be used for misinformation campaigns.

Miller says one of the drawbacks of generative AI is its propensity for hallucinations.

“Like a clever, imaginative child, a generative AI model will sometimes create content that either contradicts the source or is factually incorrect, all presented under the appearance of fact.”

Already there are documented examples of generative AI hallucinations creating issues for organisations. A US law firm is facing sanctions after using ChatGPT to search for legal precedents supporting a client’s case, which resulted in fabricated material being submitted to the courts.

Miller says there are data sets that should never be entrusted to public generative AI tools.

“Financial information or commercially sensitive material should never be exposed to public AI tools like ChatGPT – if it were leaked, you could run the risk of losing your competitive edge, or even breaching market regulations.”

“The same goes for any data belonging to customers, or personal information like health records, credentials or contact details.” 

Private AI tools might be fine to use with sensitive data, but Miller says you should still act with caution.

“Even a private AI tool should go through a security assessment before you entrust your company data to it – this will ensure that there are protective controls in place, so you can reap any benefits without the repercussions.”

Better governance is the key to using AI

Miller says despite the risks, businesses shouldn’t shy away from AI.

“It’s a great instrument for innovation and productivity, and many of the major software companies are weaving AI into our everyday business technology, for example Microsoft 365 Copilot or Bing search.”

“Like any new technology, rather than blindly adopting it, it’s worth defining the value or outcome you want to achieve – that way you can implement it effectively and safely into your organisation.”

“Every business should look to create an AI policy that outlines how the company should strategically use AI and provides guidelines for employee usage – our research suggests only around 12% of businesses have such policies in place, so there’s room for improvement there.”