Preventing a Deepfake Scam at Ferrari: The Manager’s Success Story

Security Awareness
19 August 2024

When the “human factor” makes a difference and drives criminals away!

Kudos are certainly due to the Ferrari manager who foiled an attempt to defraud the Maranello-based company. At the end of the day, though, he did what everyone should do: he stayed alert and aware, became suspicious, and used a very simple ploy that immediately warded off the risk.

It happened on a hot July day.
The manager received WhatsApp messages, apparently from Ferrari’s CEO Benedetto Vigna, alerting him to a supposedly imminent large acquisition. The messages, however, came from an unknown, unrecognizable number; the explanation offered was the need to maintain the utmost discretion.

“Be ready to sign the non-disclosure agreement that our lawyer will send you as soon as possible. The Italian market regulator and the Milan Stock Exchange have already been informed. Remain ready and please maintain the utmost discretion.”

This was the tenor of the messages, which were followed by a phone call in a very realistic imitation of Vigna’s voice, complete with the CEO’s Basilicata accent. But the manager noticed some strange metallic sounds in the voice, a warning sign that, together with the unknown number and the different-than-usual profile picture, triggered the saving move: a very simple, very friendly off-topic question:

“Sorry, Benedetto, what is the title of the book you recommended?”

A cold shower for the criminal, who was running the scam using the latest deepfake techniques. So cold that the call was cut off at once and the attacker abandoned his fraud attempt.

The incident, recounted by Bloomberg, underscores on the one hand how much deepfakes are increasingly being used by criminals, and on the other how much scope there still is to defend against these kinds of attacks: the techniques used by the malefactors are not yet perfect, and with a little attention they can be recognized, and a simple strategy was enough to block an attack that could have caused a great deal of damage.

We are talking about photos, videos, and audio created with artificial intelligence software that, starting from real content, can modify or recreate the features and movements of a face or body and faithfully imitate a voice. It is a class of attack that is used more and more often and that generally succeeds in its criminal intent. Just think of the attack suffered last February by a Hong Kong company, scammed through a fake videoconference, or of the two Russian pranksters Vovan and Lexus, who impersonated African Union Commission Chairperson Moussa Faki to phone several European leaders.

What most concerns experts is that this technology, while it still shows weaknesses today, may in the not-too-distant future become so refined that distinguishing fakes from reality is almost impossible. This is a risk not only for companies but also for private citizens, who can be easily scammed if attacked on the “emotion” front. Imagine, for example, parents receiving a call from a child in need of money, or people who are not technologically savvy receiving phone calls from relatives or friends in distress.

A very high risk for everyone, and one that can strike across the board, so much so that the National Security Agency (NSA), the Federal Bureau of Investigation (FBI), and the Cybersecurity and Infrastructure Security Agency (CISA) released a Cybersecurity Information Sheet (CSI), Contextualizing Deepfake Threats to Organizations, which provides an overview of synthetic media threats, techniques, and trends.

Among the essential points listed by the Italian Data Protection Authority we read:

  • Avoid the uncontrolled sharing of images of yourself or your loved ones. In particular, if you post images on social media, remember that they may remain online forever and that, even if you later decide to delete them, someone may already have appropriated them.
  • Although it is not easy, one can learn to recognize a deepfake. There are elements that help: the image may appear pixelated (i.e., somewhat “grainy” or “blurry”); people’s eyes may move unnaturally at times; the mouth may appear distorted or too large while the person is saying certain things; the light and shadows on the face may appear abnormal.
  • If you suspect that a video or audio clip is a deepfake made without your knowledge, you should definitely avoid sharing it (so as not to multiply the harm through its uncontrolled spread). You may also decide to report it as a possible fake to the platform hosting it (e.g., a social network).
  • If you believe that a deepfake has been used in a way that constitutes a crime or a violation of privacy, you can turn to law enforcement authorities (e.g., the Postal Police) or the Data Protection Authority, as appropriate.

In general, the recommendation remains: maintain presence of mind and awareness, never act on impulse, and never blindly trust anyone, especially when receiving requests for money.
Even if the message appears to come from our top executive, someone it is hard to say no to, it is always better to check by calling the person concerned and ascertaining the authenticity of the request.

These are not difficult behaviors to adopt; it is a matter of developing an attentiveness and sensitivity that can certainly be trained through effective, tailored training, an investment of time and resources that has never been more important than today.

It only takes one successful attack to ruin a company on both the financial and reputational fronts.

This latest Ferrari case proves it: a manager with the right digital posture saved the company from a nasty misadventure. It confirms that the human factor, as the weakest link in the chain, remains the most targeted, and that strengthening it means truly securing people and companies.
