Community News

From killer drones to AI that threatens humanity’s future, we as a community need to be aware of what is coming

Photo Credit: Reneé Thompson

BY SIMONE J. SMITH

“You can control the algorithm. Let the algorithm work for you.” – Kareem Perez

2023 was a game-changing year for artificial intelligence, and it was only the beginning. 2024 is set to usher in a host of scary advancements that may include artificial general intelligence and even more realistic deep fakes. As of late, my interest has turned to technology, because let’s be real: it is the future. From killer drones to AI that threatens humanity’s future, we as a community need to be aware of what is coming, so let’s get into it. With the help of Digital Marketing Thought Leader Kareem Perez, we are going to break down some of the scariest AI breakthroughs likely to come in 2024.

Q* — the age of Artificial General Intelligence (AGI)?

Amid corporate chaos at OpenAI, rumors have been swirling about an advanced technology that could threaten the future of humanity. That OpenAI system, called Q* (pronounced Q-star), may embody the potentially groundbreaking realization of artificial general intelligence (AGI). AGI is a hypothetical tipping point, also known as the “Singularity,” at which AI becomes smarter than humans. Yep! We are talking about I, Robot (predictive programming at its best). Current generations of AI still lag in areas where humans excel, such as context-based reasoning and genuine creativity.

Scientists have said that AGI could potentially perform particular jobs better than most people. Reports suggest it could also be weaponized and used, for example, to create enhanced pathogens, launch massive cyberattacks, or orchestrate mass manipulation.

The idea of AGI has long been confined to science fiction, and many scientists believe we’ll never reach this point, but it is not beyond the realm of possibility. That thought alone terrifies me, as it should you.

Election-Rigging Hyper Realistic Deep Fakes

One of the most pressing cyber threats is that of deep fakes: entirely fabricated images or videos of people that might misrepresent them, incriminate them, or bully them. Until recently, AI deep fake technology hasn’t been convincing enough to pose a significant threat, but that might be about to change.

AI can now generate real-time deep fakes (live video feeds, in other words), and it is becoming so good at generating human faces that people can no longer tell the difference between what’s real and what’s fake. A study published in the journal Psychological Science (November 13th, 2023) unearthed the phenomenon of “hyperrealism,” in which AI-generated content is more likely to be perceived as “real” than actually real content.

I warned my parents about this, because our elderly population is being victimized. Many people may not be aware of the existence and capabilities of deep fake technology, and that lack of technical knowledge can make them more vulnerable to falling for deep fake content. Some deep fakes incorporate voice cloning technology to mimic the voices of specific individuals (like your niece, nephew, grandson, or granddaughter). This can make the manipulated content seem authentic, especially in audio or video messages.

As AI matures, one scary possibility is that people could deploy deep fakes to attempt to swing elections. The Financial Times (FT) reported, for example, that Bangladesh is bracing itself for an election in January that could be disrupted by deep fakes. The U.S. is gearing up for a presidential election in November 2024, and there is a possibility that AI and deep fakes could shift the outcome of this critical vote.

Mainstream AI-powered Killer Robots

Governments around the world are increasingly incorporating AI into tools for warfare. The U.S. government announced on November 22nd, 2023 that 47 states had endorsed a declaration on the responsible use of AI in the military. Why was such a declaration needed? Well, “irresponsible” use is a real and terrifying prospect.

In 2024, it’s likely we’ll see AI used not only in weapons systems, but also in logistics and decision-support systems, as well as research and development. In 2022, an AI system generated 40,000 novel, hypothetical chemical weapon compounds. Various branches of the U.S. military have ordered drones that can perform target recognition and battle tracking better than humans.

One of the most feared development areas is that of lethal autonomous weapon systems (LAWS), or killer robots. Several leading scientists and technologists have warned against killer robots (Stephen Hawking in 2015, Elon Musk in 2017), but the technology has not yet matured to that point.

I touched base with Kareem Perez, Executive Director at The TechEffect, and he weighed in on what our community needs to prepare for in 2024.

“We need to be aware that content online needs to be validated. People can create video messages, and it is not them,” Kareem began. “It can be used to spread propaganda, or trick people (elections, schemes). Deep fakes can be designed to evoke specific emotions by altering facial expressions and body language. This emotional manipulation can make people more susceptible to believing and sharing the content.

My biggest worry is deep fake content. It can spread quickly on social media platforms, where users may share and engage with content without thoroughly verifying its authenticity. The viral nature of social media can contribute to the rapid spread of deep fakes.

Education and conversation are always important. We need to adopt technology and understand it. Technology has been around for a while; it is not new. Technology is rapidly expanding. We can’t be afraid of it; it is just evolving, so we have to evolve with it.”

It’s important for individuals to be aware of the existence of deep fake technology, stay vigilant, and employ critical thinking skills when consuming media content. Fact-checking, verifying sources, and being cautious about sharing sensitive information are essential practices for mitigating these risks.
