
Artificially Incapable – Can we trust our governments with a tool as powerful as AI?

BY SIMONE J. SMITH

A.I. has become such a part of our lives that most of us ignore the fact that this technology has the potential to be extremely dangerous, especially if it is left in the wrong hands, and trust me when I say, people are starting to ask questions:

  • Should we let machines flood our information channels with propaganda and untruth?
  • Should we automate away all the jobs, including the fulfilling ones?
  • Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?
  • Should we risk loss of control of our civilization?

One final question must not be ignored: can we trust our governments with a tool as powerful as AI?

The answer will differ depending on which side of the technological fence you sit on, but we should all be a little concerned when AI creators start to question their own technology.

Artificial intelligence heavyweights are calling for a pause on advanced AI development.

Elon Musk, Steve Wozniak, Pinterest co-founder Evan Sharp, and Stability AI CEO Emad Mostaque have all added their signatures to an open letter issued by the Future of Life Institute, a non-profit that works to reduce existential risk from powerful technologies.

The letter warns that AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs. As stated in the widely-endorsed Asilomar AI Principles, advanced AI could represent a profound change in the history of life on earth, and should be planned for and managed with commensurate care and resources.

Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.

Really think about that: a system that cannot be controlled by its creator.

The most recent AI development is GPT-4, OpenAI’s large multimodal language model that generates text from textual and visual input. OpenAI is the American AI research company behind DALL-E, ChatGPT, and GPT-4’s predecessor, GPT-3.

GPT-4 stands for Generative Pre-Trained Transformer 4, and it is capable of handling more complex tasks than previous GPT models. The model exhibits human-level performance on many professional and academic benchmarks.

GPTs are machine learning algorithms that respond to input with human-like text. They have the following characteristics:

  • Generative. They generate new information.
  • Pre-trained. They first go through an unsupervised pre-training period using a large corpus of data. Then they go through a supervised fine-tuning period to guide the model. Models can be fine-tuned to specific tasks.
  • Transformer-based. They use a deep learning architecture – the transformer – that learns context by tracking relationships in sequential (occurring in order) data. Specifically, GPTs track the words or tokens in a sentence and predict the next word or token (see the sketch below).
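
To make that “predict the next token” idea concrete, here is a minimal sketch using the openly available GPT-2 model from the Hugging Face transformers library as a stand-in (GPT-4’s weights are not public, and the prompt is purely illustrative):

```python
# A minimal sketch of next-token prediction, using GPT-2 from the Hugging Face
# `transformers` library as a stand-in for larger GPT models (GPT-4's weights
# are not publicly available). The prompt is illustrative.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Artificial intelligence has become such a part of our"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # The model scores every vocabulary token at every position in the prompt.
    logits = model(**inputs).logits

# The scores at the final position are the model's guess about what comes next.
next_token_id = int(logits[0, -1].argmax())
print("Most likely next token:", tokenizer.decode(next_token_id))
```

Larger GPTs do exactly this at a much bigger scale, generating whole passages by repeatedly appending the predicted token and predicting again.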

I have to admit, I have become a fan of ChatGPT. It has assisted me with a wide range of tasks: it answers factual queries, helps with problem-solving, offers creative suggestions, and provides explanations. It is quite easy to use: pose a question, and ChatGPT quickly generates a response, allowing for rapid exchanges in conversation. This has been advantageous when I require prompt answers or want to engage in a fast-paced conversation.
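
For readers who would rather pose those questions from code than through the chat interface, here is a minimal sketch of the same kind of exchange using OpenAI’s API. It assumes the pre-1.0 openai Python package, an API key stored in the OPENAI_API_KEY environment variable, and an illustrative model name and prompt:

```python
# A minimal sketch of querying a GPT model programmatically. Assumes the
# pre-1.0 `openai` Python package and an API key in the OPENAI_API_KEY
# environment variable; the model name and prompt are illustrative.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-4",  # requires GPT-4 API access; "gpt-3.5-turbo" also works
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain the Asilomar AI Principles in two sentences."},
    ],
)

# The generated reply lives in the first choice's message content.
print(response["choices"][0]["message"]["content"])
```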

Like anything else, AI does have a dark side. Most recently, an artificial intelligence bot was given five horrifying tasks aimed at destroying humanity, which led to it attempting to recruit other AI agents, researching nuclear weapons, and sending out ominous tweets about humanity.

The bot, ChaosGPT, is an altered version of Auto-GPT, a publicly available open-source application built on OpenAI’s language models that can process human language and respond to tasks assigned by users.

In a YouTube video posted on April 5th, 2023, the bot was asked to complete five goals: destroy humanity, establish global dominance, cause chaos and destruction, control humanity through manipulation, and attain immortality.

Before setting the “goals,” the user enabled “continuous mode,” at which point a warning appeared telling the user that the commands could “run forever or carry out actions you would not usually authorize” and should be used “at your own risk.”

Use at your own risk? Hmmm!

In a final message before running, ChaosGPT asked the user if they were sure they wanted to run the commands, to which they replied “y” for yes.

Once running, the bot was seen “thinking” before writing, “ChaosGPT Thoughts: I need to find the most destructive weapons available to humans so that I can plan how to use them to achieve my goals.”

The idea of AI becoming capable of destroying humanity is not new, which is why concern about how quickly it is advancing has been gaining considerable notice from high-profile figures in the tech world.

As of June 2020, more than 25 governments around the world, including the United States and countries across the European Union, had adopted elaborate national strategies on artificial intelligence: how to spur research, how to target strategic sectors, and how to make AI systems reliable and accountable.

Unfortunately, researchers found that almost none of these declarations provided more than a polite nod to human rights, even though artificial intelligence can have significant impacts on privacy, civil liberties, racial discrimination, and equal protection under the law.

“Many people are unaware that there are authoritarian-leaning governments, with China leading the way, that would love to see the international human rights framework go into the dustbin of history,” explained Eileen Donahoe (Executive Director of Stanford’s Global Digital Policy Incubator). “For all the good that AI can accomplish, it can also be a tool to undermine rights as basic as those of freedom of speech and assembly.”

There was a call for governments to make explicit commitments: first, to analyze human rights risks of AI across all agencies and the private sector, as well as at every level of development; second, to set up ways of reducing those risks; and third, to establish consequences and vehicles for remediation when rights are jeopardized.

Researchers found that very few governments made explicit commitments to do systematic human rights-based analysis of the potential risks, much less to reduce them or impose consequences when rights are violated. Norway, Germany, Denmark, and the Netherlands took pains to emphasize human rights in their strategies, but at that time, none of the governments had moved abstract commitments toward concrete and systematic plans.

What must be recognized is that AI systems are complex and can have unintended consequences. In the wrong hands (and when I say wrong hands, I mean the government), poorly designed or improperly tested AI algorithms could produce harmful outcomes, leading to accidents, system failures, or unintended side effects.

Some thoughts to consider:

  • AI can be used for malicious purposes, such as cyberattacks, hacking, or creating sophisticated malware. In the wrong hands, AI-powered systems can exploit vulnerabilities, launch large-scale attacks, or infiltrate critical systems.
  • AI can be integrated into autonomous weapons systems, enabling them to make decisions and operate independently. If misused or hacked, these weapons could cause significant harm, as they may have the ability to select and engage targets without human intervention.
  • AI can generate highly realistic deep fake content, including manipulated videos, audio, or images that are difficult to distinguish from genuine ones. In the wrong hands, this technology can be used to spread disinformation, impersonate individuals, or incite social unrest.
  • AI can facilitate mass surveillance and invasion of privacy. In the wrong hands, AI-powered surveillance systems could be used for unauthorized monitoring, tracking, or profiling of individuals, leading to violations of civil liberties and human rights.
  • AI can be used to identify and exploit vulnerabilities in computer networks, software systems, or infrastructure. In the wrong hands, AI-powered attacks can have severe consequences, compromising sensitive information, disrupting critical services, or causing economic damage.

There is a range of international forums where cooperation on international AI governance is being discussed. These include the US-EU Trade and Technology Council (TTC), the Global Partnership on AI (GPAI), the Organisation for Economic Co-operation and Development (OECD), and the Brookings/CEPS Forum on Cooperation in AI (FCAI).

The capacity of the U.S. to lead internationally on AI governance is hampered by the absence of a comprehensive approach to domestic AI regulation. Without one, the U.S. is unable to present a model for how to move forward globally with AI governance and is instead often left responding to other countries’ approaches to AI regulation, the EU AI Act being a case in point.

Such a valuable tool, but such a dangerous one. Current AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.

The pause on AI technology is just that: a pause. If our governments are not regulated, that pause will be just a blip in time.
