Community News

“No way! Was that really you?” The terrifying rise of AI deep fakes

Created with AI by The Deep View

BY SIMONE J. SMITH

“I don’t want you to panic,” your friend tells you, “but there’s an inappropriate video of you circulating on the internet.”

At first, you think it is a sick joke. Then you click on the link. It is a nude video that was recorded and published without your knowledge or consent. That single video has spawned hundreds of deep fake iterations; at its height, there are more than 830 links containing the material.

“This is really one of the most devastating moments in my entire life.” You don’t know how to react.

A key element of the ethics of AI (an ever-exploding field), and a key component of this ever-unfolding AI story, revolves around deep fakes: AI-powered technology capable of creating an image or video of someone that is both highly convincing and completely fake.

With half the world’s population heading to the polls this year, Sumsub, a global full-cycle verification provider, detected a 245% increase in deep fakes worldwide, as well as a 303% increase in the U.S. The findings show a growing number of deep fakes in countries where elections occur in 2024, including the US, India, Indonesia, Mexico, and South Africa.

Sumsub’s Q1 2024 verification and identity fraud data have provided some key global highlights on deep fakes:

  • Countries with the most deep fakes detected in Q1 2024 are: China, Spain, Germany, Ukraine, the US, Vietnam, and the UK.
  • There’s noticeable growth of deep fake incidents in countries where elections are planned for 2024: India (280%), the US (303%), South Africa (500%), Mexico (500%), Moldova (900%), Indonesia (1550%), and South Korea (1625%).
  • In the EU (where European Parliament elections are set for June), many countries saw deep fake cases increase, including Bulgaria (3000%), Portugal (1700%), Belgium (800%), Spain (191%), Germany (142%), and France (97%).
  • Even in countries with no elections in 2024, deep fake scams are advancing at unprecedented rates. This includes China (2800%), Turkey (1533%), Singapore (1100%), Hong Kong (1000%), Brazil (822%), Vietnam (541%), Ukraine (394%), and Japan (243%).
  • While AI fraud grew in most places, there were some countries holding elections in 2024 where the number of deep fake incidents decreased. This includes the UK (-10%), Croatia (-33%), Ireland (-40%), and Lithuania (-44%).

There is an aspect of this tech that has already been weaponized in ways that run the gamut from disturbing to horrifying:

  • Non-consensual deep fakes: Non-consensual deep fakes are digitally altered or artificially generated content, typically videos or images, that depict individuals in scenarios they did not participate in and did not consent to. This technology leverages advanced machine learning techniques, particularly deep learning, to superimpose or graft an individual’s likeness onto someone else’s body, creating realistic but false representations.
  • Pornographic abuse: Our opening story speaks to pornographic abuse, which involves the creation, distribution, and consumption of sexually explicit material without the consent of the person depicted. This includes revenge porn, non-consensual pornography, and other forms of sexual exploitation online.
  • Election interference: Deep fakes can create videos of political candidates or public figures making statements, or taking actions, that never happened. These fabricated clips can be used to damage reputations, influence public opinion, or create confusion among voters. Deep fakes can also exploit voters’ emotions by creating content that triggers fear, anger, or other strong reactions, and can influence voting behavior.
  • Theft: Fraudsters can create deep fake videos of individuals to gain access to secure systems, bank accounts, or personal data. Deep fake audio or video can be used in spear-phishing attacks, where the attacker pretends to be someone the victim knows and trusts in order to steal sensitive information.

So, how do you protect yourself? Some quick thoughts: verify the authenticity of videos and images by checking multiple reputable sources. Do your best to limit the amount of personal information you share online, as it can be used to create convincing deep fakes. If you come across any deep fake content, report it to the relevant authorities and to the platforms where the content is hosted.

As a media source, our number one goal is to educate the community. We want you to share this information about deep fakes, and how to detect them, with friends, family, and colleagues. We actively advocate for, and remain supporters of, educational programs that teach critical thinking and media literacy.
