
Corporate World Faces Unprecedented Deepfake Crisis After $25 Million AI Heist


The business world is facing renewed cybersecurity scrutiny after a sophisticated deepfake scam cost the multinational engineering firm Arup roughly $25 million (HK$200 million). As Hong Kong police confirmed, the incident is an eye-opener because it did not involve breaking into any systems; instead, the attackers used advanced artificial intelligence to masquerade as company leadership and exploit an employee's trust.

In the latest and perhaps most alarming development in this new era of cybercrime, a finance employee in Arup's Hong Kong office was duped into approving a massive transfer of funds. The scam began with an email, and the employee's initial suspicions were dispelled once he was invited to a supposed video conference. On the call, he was joined by what appeared to be the company's UK-based Chief Financial Officer (CFO) and several other colleagues.

The case is one of a flood of deepfake-based frauds around the world, in which specialized AI applications are used to manufacture convincing video and voice impersonations of executives. What makes these new offenses so dangerous is that the technology has become so accessible and persuasive that it bypasses conventional human and technical verification controls. The Arup employee, acting under the assumption that he was carrying out urgent, confidential instructions from trusted leadership, executed 15 separate wire transfers. Every face and voice on the call was later identified as an AI-generated deepfake.

Adding to the pressure on corporations, cybersecurity professionals and law-enforcement agencies are describing this as a new phase of technology-enhanced social engineering. Criminals are deliberately using generative AI tools, which require little technical expertise to operate, to launch highly targeted and deceptive attacks. Rob Greig, Arup's Global Chief Information Officer, has publicly confirmed that the company's internal systems were not compromised, underscoring that the human factor is currently the weakest link in the defense chain.

Corporations and security firms are scrambling to strengthen internal security measures in response. Experts now recommend non-technological safeguards, such as a private, pre-arranged code word or safe phrase for senior-level financial approvals, and a mandatory multi-channel verification rule under which any request made over video must be confirmed through a separate, trusted phone call or a face-to-face check. A minimal sketch of how such a rule might work appears below.
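For illustration only, here is a minimal Python sketch of the two controls described above: a pre-arranged safe phrase plus an independent out-of-band confirmation before a large transfer is approved. Everything here is hypothetical (the SAFE_PHRASES store, the confirm_out_of_band helper, the threshold amount); a real control would live inside an auditable approval workflow, not a script. The point is the logic: reject unless both checks pass.

```python
import hmac
from dataclasses import dataclass

# Hypothetical store of safe phrases, agreed offline per executive
# (never sent or discussed over email or video).
SAFE_PHRASES = {
    "cfo": "amber-harbor-42",
}

APPROVAL_THRESHOLD = 10_000  # assumed cutoff above which both checks apply


@dataclass
class TransferRequest:
    requester_role: str   # e.g. "cfo"
    amount: float
    spoken_phrase: str    # phrase given during the video call


def confirm_out_of_band(requester_role: str) -> bool:
    """Placeholder for the second channel: a call-back to a number
    already on file, or an in-person check. Here we simply ask the
    operator to attest that the call-back happened."""
    answer = input(f"Called back the {requester_role} on a trusted number? [y/N] ")
    return answer.strip().lower() == "y"


def approve_transfer(req: TransferRequest) -> bool:
    if req.amount < APPROVAL_THRESHOLD:
        return True  # small transfers follow the normal process

    # Check 1: the pre-arranged safe phrase must match exactly.
    # compare_digest avoids leaking information via timing.
    stored = SAFE_PHRASES.get(req.requester_role, "")
    if not hmac.compare_digest(stored, req.spoken_phrase):
        print("REJECTED: safe phrase mismatch; possible impersonation.")
        return False

    # Check 2: independent confirmation over a separate, trusted channel.
    if not confirm_out_of_band(req.requester_role):
        print("REJECTED: no out-of-band confirmation.")
        return False

    return True


if __name__ == "__main__":
    request = TransferRequest(requester_role="cfo", amount=250_000,
                              spoken_phrase="amber-harbor-42")
    print("Approved" if approve_transfer(request) else "Blocked")
```

The key design choice is that neither channel alone is sufficient: a deepfake on a video call cannot supply the offline safe phrase, and a stolen phrase still fails without the call-back to a trusted number.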

Despite such efforts, critics argue that these measures are merely reactive, and that regulators have been distressingly slow to develop legal frameworks around synthetic media. The judicial proceedings and media furor surrounding the Arup heist feed an ongoing debate over whether AI firms should be responsible for protecting corporate employees and the financial system against malicious use of their technology. The outcomes of these inquiries are expected to set landmark precedents for how online platforms and developers will be held accountable when their AI models are used for fraud.



