Community News

The dangers and genius of deepfakes: A modern digital threat

BY SIMONE J. SMITH

“Hi Auntie! I am in trouble and need your help. Can you send $2,000 right away? It’s an emergency. Just wire it to the account I’ve attached to this message. I’ll explain later.”

You look at your phone, confused. Why wouldn’t she just call? This doesn’t seem like her. Your phone rings, jolting you out of your thoughts.

“Hey Auntie, what’s up?” It is your niece. Perfect! Now is your chance to ask her if everything is okay. “Are you okay? I just got a video from you asking for money. It didn’t sound right.”

“No!” Your niece says emphatically. “I did not send any video. That’s a scam! Someone must have created a fake video of me. It’s called a deepfake. Yeah, they can make it look and sound like me, but it’s not real. Always double-check if something seems off, and don’t send any money unless you talk to me first!”

This is not the first time that I have shared this information with you. I realize that sometimes we must hear, read, or see information more than once for it to sink in. Deepfakes are one of those offshoots of generative AI technology that were fun and rather innocuous in the beginning, but it didn’t take long for them to turn malicious.

Deepfake tech is nothing new; it’s been on display in different environments for years, helping, at first, to bring back or de-age actors in movies. Then, in 2017, the same tech was leveraged to create fake pornography of celebrities, an issue that has since gotten worse.

Back then, it took people, good actors or bad, hours and sometimes days to produce deepfakes that looked obviously synthetic even then. Today, deepfakes aren’t just a matter of slapping one person’s face onto another’s body. The AI behind them carefully analyzes facial movements, expressions, and even how light and shadow interact with the face. This ensures that when the face moves, the angles and details remain consistent, making the fake almost impossible to detect without expert analysis.
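
For readers curious about the mechanics, here is a rough sketch of one simple consistency check of the kind researchers apply to suspect video: tracking facial landmarks frame by frame and flagging unnatural jumps. It is an illustration only, not any company’s actual detector; it assumes the opencv-python and mediapipe packages, and “suspect.mp4” is just a placeholder file name.

```python
# Illustrative only: track face landmarks frame by frame and flag
# unnaturally large jumps, one crude cue for spliced or unstable video.
import cv2
import mediapipe as mp
import numpy as np

face_mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=False)
cap = cv2.VideoCapture("suspect.mp4")  # placeholder file name

prev, jumps = None, []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    result = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if not result.multi_face_landmarks:
        continue
    # Flatten the face-mesh landmarks into one (x, y) array per frame.
    pts = np.array([(p.x, p.y) for p in result.multi_face_landmarks[0].landmark])
    if prev is not None:
        # Average landmark displacement between consecutive frames;
        # sudden spikes can hint at splices or unstable synthesis.
        jumps.append(np.mean(np.linalg.norm(pts - prev, axis=1)))
    prev = pts

cap.release()
face_mesh.close()
print(f"mean inter-frame landmark motion: {np.mean(jumps):.5f}")
```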

Voice deepfakes take it a step further by mimicking the pitch, accent, and cadence of a person’s speech. It’s not just about getting the sound right; it’s about replicating a person’s unique speech patterns, pauses, and even emotional intonations. This level of precision makes it extremely difficult to differentiate a fake from a real recording.
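
To make those traits concrete, here is a minimal sketch that measures two of them, pitch and pause structure, from a recording. It assumes the librosa package, and “voice_sample.wav” is a placeholder file name; the silence threshold is an arbitrary choice for illustration.

```python
# Minimal sketch: extract pitch and a crude cadence proxy from a recording.
import librosa
import numpy as np

y, sr = librosa.load("voice_sample.wav", sr=None)  # placeholder file name

# Fundamental frequency (pitch) track via the YIN estimator.
f0 = librosa.yin(y, fmin=80, fmax=400, sr=sr)

# Crude cadence proxy: the share of frames whose RMS energy is near silence.
rms = librosa.feature.rms(y=y)[0]
pause_share = np.mean(rms < 0.01)  # arbitrary illustrative threshold

print(f"median pitch: {np.median(f0):.1f} Hz")
print(f"low-energy (pause-like) frames: {pause_share:.0%}")
```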

Deepfakes also play on our trust in video and audio as unshakeable evidence. For decades, if something was captured on video, it was regarded as proof. This psychological reliance on visual and auditory cues is exactly what deepfakes exploit. Even when we know deepfakes exist, it can be hard to shake the instinct to believe our eyes and ears.

The implications of this are as real as they are obvious: the past year has been marked by reports of AI-generated deepfake images, videos, phone calls, and Zoom calls that have served a variety of nefarious purposes, from targeted sexual harassment to electoral misinformation, fraud, identity theft, and outright theft. Earlier this year, scammers stole $25 million from a company after impersonating a lineup of the company’s leadership on a Zoom call and asking a real employee to move money to a different account.

This is the problem Datambit is aiming to solve. Datambit, a nine-month-old British startup, has developed an AI-powered model called Genui that is designed specifically for deepfake detection. The system employs machine learning algorithms to detect anomalies in video, audio, and other visual content, identifying patterns that could indicate the presence of a deepfake or otherwise manipulated content.
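
Datambit has not published Genui’s internals, so the following is only a generic illustration of the anomaly-detection pattern described above: train a model on feature vectors from known-genuine media, then flag outliers. It assumes the scikit-learn package, and the feature extraction is stubbed out with synthetic numbers.

```python
# Generic anomaly-detection pattern (not Genui's actual method):
# fit on features of genuine media, then flag suspicious outliers.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
genuine = rng.normal(0.0, 1.0, size=(500, 16))  # stand-in "genuine" features
suspect = rng.normal(3.0, 1.0, size=(5, 16))    # stand-in outlier features

detector = IsolationForest(random_state=0).fit(genuine)

# predict() returns +1 for inliers (looks like genuine media) and
# -1 for anomalies that deserve a closer look.
print(detector.predict(suspect))
```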

It includes facial recognition algorithms, audio analysis, voice biometrics, and audio forensics. Genui can analyze audio spectrograms (visualizations of how the energy, or intensity, of different frequencies in an audio signal changes over time) to identify elements suggestive of a deepfake. When working with specific people, the model can use voice biometrics to verify a speaker’s identity against previous vocal samples.
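
Here is what the spectrogram step can look like in practice. This sketch assumes the scipy package, with “clip.wav” as a placeholder file, and the single cue it checks (missing high-frequency energy, a trait of many synthetic voices) is my illustration, not Genui’s actual test.

```python
# Sketch of the spectrogram step: map energy per frequency over time,
# then check one simple cue (high-frequency energy share).
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

sr, samples = wavfile.read("clip.wav")  # placeholder file name
if samples.ndim > 1:                    # mix stereo down to mono
    samples = samples.mean(axis=1)

# f: frequency bins (Hz), t: time bins (s), Sxx: energy per (f, t) cell
f, t, Sxx = spectrogram(samples.astype(float), fs=sr)

# Many synthetic voices carry suspiciously little energy above 8 kHz
# compared with natural recordings.
hf_share = Sxx[f > 8000].sum() / Sxx.sum()
print(f"high-frequency energy share: {hf_share:.2%}")
```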

Hari Natarajan, a member of Datambit’s advisory board, shared that the company is in an early stage of beta testing; for now, the system is somewhat closed off, requiring early adopters to either bulk-upload material through its API or upload it directly into its detection engine. As he put it, “At the end of the day, this solution can go to pretty much anybody that’s out there.”

Datambit plans to focus on the financial services industry first, “because that looks like an early pain point.”

Detection tools are a great way to deal with deepfakes, but as digital consumers, we have to stay educated about technological advances.

REFERENCES:

https://www.thestreet.com/technology/how-one-tech-company-is-tackling-recent-proliferation-deepfake-fraud

https://datambit.com/#insights

https://www.thestreet.com/technology/cybersecurity-expert-says-the-next-generation-of-identity-theft-is-here-identity-hijacking

https://www.thestreet.com/technology/deepfake-porn-ai-taylor-swift-social-media-lawyer-experts
