ChatGPT is an artificial intelligence application that can be used to increase the volume and effectiveness of cybercrime attacks.

Security Awareness
27 January 2023

A culture of corporate awareness and lifelong learning is becoming increasingly vital.

Once upon a time there were journalists, writers, content and text editors. They were human beings with a particular talent for using the written word, a talent in most cases honed through years of study and training in the art of storytelling and skilfully converting concepts and knowledge into a form accessible to everyone.

Now, with the arrival of ChatGPT, this is a breed set to disappear into the rapidly spinning technological vortex that, for some years now, has been swallowing up a great deal of hard-earned knowledge and craft.

Released on 30 November 2022, this new chatbot, developed by the artificial intelligence research company OpenAI, is a real slap in the face for anyone who believed that no machine could ever replace their hard-earned knowledge and skills.

Yet since the web launch of this application, which is completely free to use, it seems that this last taboo has also been broken.

ChatGPT has a wide range of functions and is extremely adaptable. It can write various types of text in many different subject areas, styles and languages. It can also generate news summaries, product descriptions or articles, make recommendations, write emails, answer questions, translate languages and solve mathematical equations.

ChatGPT has been trained on an enormous database of information scraped from the Internet and features a conversational interface that allows it to answer questions, reject specific types of request, and even recognise when it has malfunctioned or made a mistake. It writes better essays than the average school or college student and apparently can even explain quantum physics to a six-year-old, write poetry, or create a personalised meal plan.

Admittedly, chatbots are already used in fields such as customer service, but they are very different from ChatGPT: most only offer a small selection of pre-programmed automated replies, whereas ChatGPT can answer a question directly and adapt very accurately to the conversation, just like a human being.

And the most impressive thing, if it is not too scary to think about, is that the more it is used, the more it will refine its intelligence and its conversational skills.

Ten years ago the film Her was released, in which the actor Joaquin Phoenix played a man who fell in love with the voice of a computer after spending days having conversations with it.

At the time, it seemed that this could become reality one day, but only in the distant future. Yet here we are, talking to a hard-to-define entity that might look like a PC or phone but which can write, and seemingly process thoughts, like a human.

For those who are sceptical, real chats with this program are available online, such as the one by Eugenio Miccoli on the Mepiù YouTube channel, in which he interviews the chatbot about itself. It is really worth listening to.

The app has already been a huge success. Just 5 days after its launch, it had gained 1 million users. It took Twitter 2 years to reach that number.

The usefulness of this type of technology is beyond question, but what are the risks and negative consequences? How much will it benefit online criminals?

Probably a lot, given that the program is free to use and can very easily do everything that a human hacker would do to attract victims, such as writing phishing emails with a different narrative every time.

In other words, even limiting the discussion to ChatGPT’s ability to generate text, its possibilities for cybercriminals are quite extensive and will likely improve rapidly the more it is used.

Take phishing for example. Phishers can start by using ChatGPT and similar platforms to generate very realistic and well-written individual emails. With the rapid availability of open source versions of the technology, those with more advanced skills and access to compromised email accounts will be able to train their AI on messages stolen from a company. With scripting and automation they can create an infinite number of mass-produced, personalised emails and can learn, in real time, what works and what doesn’t and modify their attack accordingly.

Phishing emails produced by ChatGPT are of much higher quality than most emails that hackers send today, which are usually easily recognisable by the grammatical errors they contain.

What’s more, ChatGPT is not limited to English. Apparently it understands about 20 languages, including Russian, Chinese, and Korean. This means that a Chinese hacker could use their native tongue to explain what they need and ask ChatGPT to write the email in English.

Not to mention the immense damage that such a tool could do to companies through attacks of various kinds on their reputation: fake news articles, press releases, customer reviews, blog posts and more can be easily and quickly created. These already exist, of course, but creating high-quality text takes time and money. ChatGPT will allow cyber criminals to produce a variety of messages in a range of different styles, to push any narrative they want.

And it’s not just individual phishing emails that will become indistinguishable from real ones, but entire websites. Fake sites can be used to collect credentials from visitors, spread incorrect information, or provide evidence for a counterfeit identity.

In short, despite the opportunities it undoubtedly presents, ChatGPT is opening the door to virtually infinite new ways of carrying out online crime, and we must be prepared.

Being risk-aware can make a big difference.

For this reason, a culture of corporate awareness will be increasingly important, along with continuous staff training.

It is no longer enough to be able to recognise a poorly written email to realise that you are being targeted by a scammer. You have to evolve along with the cyber criminals, learning about all the dangers and tricks of the online world. Nor is merely studying the theory of cyber security enough. You have to put the theory into practice by continuously applying defensive techniques and keeping abreast of the developments those techniques must take into account.
