Cyber Security | 5 min read

There’s still no such thing as a free lunch: A quick look at some AI risks

By Alastair Miller, 3 August 2023

Artificial Intelligence is more popular than ever. Those jumping on the AI bandwagon should first make themselves aware of the many risks associated with the likes of ChatGPT, Google’s Bard, and other AI platforms. The risks are professional as well as personal and range from potential security problems to shooting yourself in the foot.

While there are some existential-level risks worrying the likes of Elon Musk and others, who believe AI could cause societal breakdown and even ‘civilization destruction’, the risks you and I face in our daily lives are somewhat more prosaic and much less dramatic.

These tools can be very useful, but their capabilities are often limited. I like to think of them as being a bit like dealing with very intelligent children. Ask a question of that child and you’ll get a response that might include any mix of fact, conjecture, and fantasy. It is much the same with AI. You wouldn’t rely on the kid for your latest work assignment, so I’d advise a similarly cautious approach with AI.

THE SECURITY AND TRUST PROBLEM

Let’s look at the first and biggest elephant in the room: security. Anything you put into ChatGPT becomes the property of OpenAI, the creator of ChatGPT, and potentially ends up in the public domain. That alone means it’s worth exercising caution when using it and other large language models. Bear in mind, ChatGPT itself has already been hacked, so anything anyone has uploaded into the platform could be out there, circulating among inquisitive users or black-hat groups.

The next risk is simpler: if you use AI output for work but feel the need to conceal its role in what you produce, then you probably shouldn’t be doing it.

What we’ve described above scratches the surface of the emerging ethical concerns with AI. Those concerns extend to the risks associated with plagiarism, and the possibility of an AI being just plain wrong.

You can test for plagiarism yourself: find your favourite bit of poetry (or song) and ask your preferred tool to write a piece using the major themes from that poem. Chances are you’ll get several close copies, immediately recognisable as shallow facsimiles of the original. I did this with Robert Frost’s ‘Stopping by Woods on a Snowy Evening’, asking for ‘a poem about a man and his horse near a forest as the sun sets in the snow’. Three superficially ‘good’, but obviously plagiarised, versions emerged.
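If you’d rather run the same experiment from a script than the chat window, here is a minimal sketch using the OpenAI Python SDK. The model name and prompt are placeholders, and it assumes an API key is already set in your environment.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The same prompt used for the Frost experiment above
prompt = (
    "Write a poem about a man and his horse near a forest "
    "as the sun sets in the snow."
)

# Ask for several versions so recycled imagery and phrasing stand out
for i in range(3):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name; use whichever model you test
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- Version {i + 1} ---")
    print(response.choices[0].message.content)
```

Generating a few versions side by side makes the borrowed themes much easier to spot than reading a single response in isolation.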

The same goes for being wrong. We’ve all read about the attorney citing non-existent case law courtesy of a wayward chat tool. Now, imagine an engineer getting it to handle a few of the calculations for a major project. Or a security solutions architect relying on an automated scripting tool for a crucial bit of code. As the engineers say, if you design a bridge that comes down, you’d better be under it. The crushing weight of being caught using AI to cheat might be enough to finish the rest of us.

PUT IT TO THE TEST

Testing for accuracy in AI is simple: take a subject in which you are well versed and ask it for an essay or presentation on that topic. Then give it a careful read. You’ll probably find the quality of the output varies greatly. By the same token, ask it for output on a subject you know nothing about, and then appreciate that what sounds highly plausible to you as a layman might be complete gibberish to an expert. What makes this even harder to benchmark is that each new version of a tool offers different levels of skill. A month ago it may have been writing excellent code, but the latest version could now be terrible. Having to double-check its capabilities across multiple areas after each upgrade makes trusting its outputs a time-consuming task.
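One way to make that repeated checking less painful is to script the prompts and keep the judging for yourself. The sketch below is one illustrative approach, again assuming the OpenAI Python SDK and an API key in the environment; the topics and model name are stand-ins for whatever you actually work with.

```python
from datetime import date
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

MODEL = "gpt-4o"  # placeholder; re-run this script whenever the model version changes

# Subjects you know well enough to judge the answers yourself
topics = [
    "the OWASP Top 10 web application risks",
    "how TLS certificate validation works",
    "secure password storage with salted hashes",
]

for topic in topics:
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": f"Write a short briefing on {topic}."}],
    )
    answer = response.choices[0].message.content
    # Save each answer for an expert read-through; the judgement stays human
    filename = f"spotcheck_{date.today()}_{topic[:20].replace(' ', '_')}.txt"
    with open(filename, "w") as f:
        f.write(answer)
    print(f"Saved {filename} for review")
```

The script only collects the answers; someone who knows each subject still has to read them and decide whether the latest version is up to scratch.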

These tests quickly show up the risks as well as the limitations of emerging AI tools. Remember, they are sophisticated software, but software nonetheless. These tools are programmed to generate outputs, and they draw on existing knowledge and data sources. This often means they are out of date. It also means the quality of the output is dependent on the quality of the input.

Epistemology – the study of knowledge – has long struggled with issues of truth and veracity. In many cases where, for example, society, culture, and politics are concerned, knowledge or truth isn’t clear cut and objective. ‘Information’ sources are better described as ‘data’ sources, because what is information to one person is disinformation to another.

The broad theme emerging from the use of these tools, then, is that the output seems more valuable, or at least more plausible, to those who don’t know better. That’s a real risk and a real danger for those tempted to shortcut things using AI.

This doesn’t mean these tools aren’t useful and don’t have their place. Far from it - we encourage their use and experimentation at work, but nearly everyone agrees the answers aren’t yet helpful enough when working in a specialised field. I’d recommend anyone look, play, and see what these tools can do. But caution should absolutely be the watchword.
