Triple-I Blog | Deepfake: a real danger

By Maria Sassian, Triple-I advisor

Videos and voice recordings manipulated with unprecedented sophistication – known as “deepfakes” – have multiplied and pose a growing threat to individuals, businesses and national security, as Triple-I warned back in 2018.

Deepfake creators use machine learning technology to manipulate existing images or recordings to make it look like people are doing and saying things they never did. Deepfakes have the potential to disrupt elections and threaten foreign relations. Already, a suspected deepfake has influenced a coup attempt in Gabon and a failed attempt to discredit Malaysia’s economy minister, according to the Brookings Institution.

Most deepfakes today are used to humiliate, harass and intimidate women. Recent research found that up to 95 percent of the thousands of deepfakes on the internet were pornographic, and up to 90 percent of those used images of women without their consent.

Companies can also be damaged by deepfakes. In 2019, the chief executive of a British energy company was tricked into transferring $243,000 to an undisclosed account by what sounded like his boss’s voice on the phone – a call later suspected to have come from thieves armed with deepfake software.

“The software was able to imitate the voice, and not just the voice: the tonality, the punctuation, the German accent,” said a spokesman for Euler Hermes SA, the insurer of the unnamed energy company. Security firm Symantec said it was aware of several similar cases of CEO voice spoofing, which cost victims millions of dollars.

A plausible, though still hypothetical, scenario involves manipulating videos of executives to embarrass them or to misrepresent market-moving news.

Insurance coverage an open question

Cyber insurance or crime insurance may provide some coverage for damage caused by deepfakes, but it depends on whether and how those policies are triggered. While cyber insurance policies can cover financial loss from reputational damage resulting from a breach, most policies require network penetration or a cyberattack before a claim is paid. Such a breach is usually not present in a deepfake incident.

The theft of money by using deepfakes to impersonate a business executive – as happened to the U.K. energy company – would likely be covered by crime insurance.

Few legal remedies

Victims of deepfakes currently have few legal options. Kevin Carroll, security expert and partner at Wiggin and Dana, a Washington, D.C., law firm, said in an email: “The key to quickly proving that an image or especially an audio or video clip is a deepfake is having access to supercomputer time. So you could try to legally ban deepfakes, but it would be very difficult for an ordinary private litigant (unlike the U.S. government) to quickly file a successful lawsuit against the creator of a deepfake, unless they can afford to rent that kind of computer horsepower and obtain expert testimony.”

An exception could be wealthy celebrities, Carroll said, who could use existing defamation and intellectual property laws to combat, for example, deepfake pornography that uses their images commercially without their consent.

A law banning deepfakes outright would run into problems with the First Amendment, Carroll said, because not all deepfakes are created for nefarious purposes. Political parodies created using deepfakes, for example, are First Amendment-protected speech.

It will be difficult for private companies to protect themselves from the most sophisticated deepfakes, Carroll said, because “the really good ones are likely to be generated by state adversaries, which are difficult (although not impossible) to sue and recover from.”

Existing defamation and intellectual property laws are probably the best remedies, Carroll said.

Potential for insurance fraud

Insurers need to be better prepared to prevent and mitigate fraud that deepfakes can enable, as the industry relies heavily on customers submitting photos and videos for self-service claims. Only 39 percent of insurers said they are taking or planning steps to mitigate the risk of deepfakes, according to a study by Attestiv.

Business owners and risk managers are advised to read and understand their policies and to discuss the terms of their coverage with their insurer, agent or broker.
