Community News

Artificially Incapable – Can we trust our governments with a tool as powerful as AI?

BY SIMONE J. SMITH

AI has become such a part of our lives that most of us ignore the fact that this technology has the potential to be extremely dangerous, especially if it is left in the wrong hands, and trust me when I say, people are starting to ask questions:

  • Should we let machines flood our information channels with propaganda and untruth?
  • Should we automate away all the jobs, including the fulfilling ones?
  • Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?
  • Should we risk loss of control of our civilization?

One final question that must not be ignored: can we trust our governments with a tool as powerful as AI?

The answer will differ depending on which side of the technological fence you sit on, but we should all be a little concerned when AI creators start to question their own technology.

Artificial intelligence heavyweights are calling for a pause on advanced AI development.

Elon Musk, Steve Wozniak, Pinterest co-founder Evan Sharp, and Stability AI CEO Emad Mostaque have all added their signatures to an open letter issued by the Future of Life Institute, a non-profit that works to reduce existential risk from powerful technologies.

The letter warns that AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs. As stated in the widely-endorsed Asilomar AI Principles, advanced AI could represent a profound change in the history of life on earth, and should be planned for and managed with commensurate care and resources.

Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.

Really think about that: a system that cannot be controlled by its creators.

The most recent AI development is GPT-4, OpenAI’s large multimodal language model that generates text from textual and visual input. OpenAI is the American AI research company behind DALL-E, ChatGPT, and GPT-4’s predecessor, GPT-3.

GPT-4 stands for Generative Pre-trained Transformer 4, and it is capable of handling more complex tasks than previous GPT models. The model exhibits human-level performance on many professional and academic benchmarks.

GPTs are machine learning models that respond to input with human-like text. They have the following characteristics:

  • Generative. They generate new information.
  • Pre-trained. They first go through an unsupervised pre-training period using a large corpus of data. Then they go through a supervised fine-tuning period to guide the model. Models can be fine-tuned to specific tasks.
  • Transformer-based. They use a deep learning architecture – the transformer – that learns context by tracking relationships in sequential (occurring in order) data. Specifically, GPTs track the words or tokens in a sentence and predict the next word or token, as sketched in the example below.
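To make the “predict the next token” idea concrete, here is a minimal sketch in Python that asks the small, openly available GPT-2 model for its single most likely next token. The choice of GPT-2 and the Hugging Face transformers library is an assumption made for illustration; the article itself does not name a toolkit, and this is not how GPT-4 is accessed.

```python
# Minimal next-token prediction with GPT-2 via the Hugging Face transformers
# library. Purely illustrative of how GPT-style models work.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Artificial intelligence has become"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits          # shape: (1, sequence_length, vocab_size)

next_token_id = int(logits[0, -1].argmax())  # index of the most likely next token
print(tokenizer.decode(next_token_id))       # prints the predicted continuation token
```

Generating longer text is just this step repeated: the predicted token is appended to the prompt and the model is asked again.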

I have to admit, I have become a fan of ChatGPT. It has assisted me with a wide range of tasks: it answers factual queries, helps with problem-solving, offers creative suggestions, and provides explanations. It is also quite easy to use: pose a question, and ChatGPT quickly generates a response, allowing for rapid exchanges in conversation. This has been advantageous when I require prompt answers or want to engage in a fast-paced conversation.
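For readers who prefer to script that kind of exchange rather than use the web interface, here is a brief sketch using OpenAI’s official Python client. This is an assumption added for illustration only (the article describes the ChatGPT website, not the API), and it requires an OPENAI_API_KEY environment variable.

```python
# Posing a single question to a GPT model through OpenAI's Python client.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # assumed model name for this sketch
    messages=[
        {"role": "user", "content": "Explain what a transformer model is in one sentence."}
    ],
)
print(response.choices[0].message.content)
```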

Like anything else, AI does have a dark side; most recently, an artificial intelligence bot was given five horrifying tasks to destroy humanity, which led to it attempting to recruit other AI agents, researching nuclear weapons, and sending out ominous tweets about humanity.

The bot, ChaosGPT, is an altered version of Auto-GPT, a publicly available open-source application built on OpenAI’s language models that can process human language and respond to tasks assigned by users.

In a YouTube video posted on April 5th, 2023, the bot was asked to complete five goals: destroy humanity, establish global dominance, cause chaos and destruction, control humanity through manipulation, and attain immortality.

Before setting the “goals,” the user enabled “continuous mode,” to which a warning appeared telling the user that the commands could “run forever or carry out actions you would not usually authorize” and should be used “at your own risk.”

Use at your own risk? Hmmm!

In a final message before running, ChaosGPT asked the user if they were sure they wanted to run the commands, to which they replied “y” for yes.

Once running, the bot was seen “thinking” before writing, “ChaosGPT Thoughts: I need to find the most destructive weapons available to humans so that I can plan how to use them to achieve my goals.”

The idea of AI becoming capable of destroying humanity is not new, which is why concern over how quickly the technology is advancing has been gaining considerable notice from high-profile figures in the tech world.

As of June 2020, more than 25 governments around the world, including the United States and members of the European Union, had adopted elaborate national strategies on artificial intelligence: how to spur research, how to target strategic sectors, and how to make AI systems reliable and accountable.

Unfortunately, it was found that almost none of these declarations provide more than a polite nod to human rights, even though artificial intelligence has potentially big impacts on privacy, civil liberties, racial discrimination, and equal protection under the law.

“Many people are unaware that there are authoritarian-leaning governments, with China leading the way, that would love to see the international human rights framework go into the dustbin of history,” explained Eileen Donahoe (Executive Director of Stanford’s Global Digital Policy Incubator). “For all the good that AI can accomplish, it can also be a tool to undermine rights as basic as those of freedom of speech and assembly.”

There was a call for governments to make explicit commitments: first, to analyze human rights risks of AI across all agencies and the private sector, as well as at every level of development; second, to set up ways of reducing those risks; and third, to establish consequences and vehicles for remediation when rights are jeopardized.

Researchers found that very few governments made explicit commitments to do systematic human rights-based analysis of the potential risks, much less to reduce them or impose consequences when rights are violated. Norway, Germany, Denmark, and the Netherlands took pains to emphasize human rights in their strategies, but at that time, none of the governments had moved abstract commitments toward concrete and systematic plans.

What must be recognized is that AI systems are complex and can have unintended consequences. In the wrong hands (and when I say wrong hands, I mean the government), poorly designed or improperly tested AI algorithms could produce harmful outcomes, leading to accidents, system failures, or unintended side effects.

Some thoughts to consider:

  • AI can be used for malicious purposes, such as cyberattacks, hacking, or creating sophisticated malware. In the wrong hands, AI-powered systems can exploit vulnerabilities, launch large-scale attacks, or infiltrate critical systems.
  • AI can be integrated into autonomous weapons systems, enabling them to make decisions and operate independently. If misused or hacked, these weapons could cause significant harm, as they may have the ability to select and engage targets without human intervention.
  • AI can generate highly realistic deep fake content, including manipulated videos, audio, or images that are difficult to distinguish from genuine ones. In the wrong hands, this technology can be used to spread disinformation, impersonate individuals, or incite social unrest.
  • AI can facilitate mass surveillance and invasion of privacy. In the wrong hands, AI-powered surveillance systems could be used for unauthorized monitoring, tracking, or profiling of individuals, leading to violations of civil liberties and human rights.
  • AI can be used to identify and exploit vulnerabilities in computer networks, software systems, or infrastructure. In the wrong hands, AI-powered attacks can have severe consequences, compromising sensitive information, disrupting critical services, or causing economic damage.

There is a range of international forums where cooperation on international AI governance is being discussed. These include the US-EU Trade and Technology Council (TTC), the Global Partnership on AI (GPAI), the Organization for Economic Co-operation and Development (OECD), and the Brookings/CEPS Forum on Cooperation in AI (FCAI).

The capacity of the U.S. to lead internationally on AI governance is hampered by the absence of a comprehensive approach to domestic AI regulation. The absence of a more comprehensive approach means that the U.S. is unable to present a model for how to move forward globally with AI governance, and instead is often left responding to other countries’ approaches to AI regulation, the EU AI Act being the case in point.

Such a valuable tool, but such a dangerous one. Current AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.

The pause on AI technology is just that: a pause. If our governments are not held accountable, that pause will be just a blip in time.

We, as humans, are guaranteed certain things in life: stressors, taxes, bills, and death are the first thoughts that pop to mind. It is not uncommon for people to have a hard time dealing with these daily stressors, and at times they find themselves losing control over their lives. Simone Jennifer Smith’s great passion is using the gifts that have been given to her to help educate her clients on how to live meaningful lives. The Hear to Help Team consists of powerfully motivated individuals who, like Simone, see that there is a need in this world: a need for real connection. As the founder and Director of Hear 2 Help, Simone leads a team that goes out into the community day to day, servicing families with their educational, legal, and mental health needs. Her dedication shows in her Toronto Caribbean newspaper articles, and in her role as a host on the TCN TV Network.

Community News

Blink equity dives deep into the gap between people of colour and decision-making roles in Canadian law firms

Photo Credit: AI Image

BY ADRIAN REECE

Representation in the workforce has been a topic of conversation for years, particularly in positions of influence, where people can shift laws and create fair policies for all races. Representation in the legal system is an even more talked about subject, with many Black men being subjected to racism in courts and not being given fair sentencing by judges.

The fear of Black men entering the system is something that plagues mothers and fathers as they watch their children grow up.

Blink Equity, a company led by Pako Tshiamala, has created an audit called the Blink Score. The audit targets law firms in Toronto and seeks to identify specific practices that reflect racial diversity. A score is given based on a few key performance indicators (KPIs), which include hiring practices, retention of diverse talent, and racial representation at every level.

The Blink Score project aims to analyze law firms in Ontario with more than 50 lawyers. The Blink Score is a measurement tool that holds law firms accountable for their representation. Firms will be ranked, and the information will be made public for anyone to access.
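The article does not describe how the Blink Score is actually computed. Purely as an illustration of how a composite score could be assembled from KPIs like the ones above, here is a hypothetical Python sketch; the KPI names, weights, and 0–100 scale are invented for the example and are not Blink Equity’s methodology.

```python
# Hypothetical composite scoring sketch (not Blink Equity's actual formula).
from dataclasses import dataclass

@dataclass
class FirmKPIs:
    hiring_diversity: float               # each KPI normalized to 0.0-1.0
    retention_of_diverse_talent: float
    representation_at_all_levels: float

WEIGHTS = {
    "hiring_diversity": 0.3,
    "retention_of_diverse_talent": 0.3,
    "representation_at_all_levels": 0.4,
}

def composite_score(kpis: FirmKPIs) -> float:
    """Weighted average of the KPIs, scaled to 0-100."""
    total = (
        WEIGHTS["hiring_diversity"] * kpis.hiring_diversity
        + WEIGHTS["retention_of_diverse_talent"] * kpis.retention_of_diverse_talent
        + WEIGHTS["representation_at_all_levels"] * kpis.representation_at_all_levels
    )
    return round(100 * total, 1)

print(composite_score(FirmKPIs(0.25, 0.40, 0.15)))  # -> 25.5
```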

This process is ambitious and seeks to give Canadian citizens a glimpse into how many people are represented across the legal field. While more and more people have access to higher education, there is still a gap between obtaining that higher education and working in a setting where change can be made. The corporate world, at its highest points, is almost always one race across the board, and very rarely do people of colour get into their ranks. They are made out to be an example of how anyone from a particular race can achieve success. However, this is the exception, not the rule. Nepotism plays a role in societal success; connections are a factor, and so is loyalty to one’s own race, even among casual acquaintances.

People of colour make up 16% of all lawyers across the province, and their representation at individual position levels ranges from 6% to 27%. These numbers display the racial disparity among law practitioners in positions of influence. Becoming a lawyer is undoubtedly a huge accomplishment. Still, when entering the workforce with other seasoned professionals, your academic accolades become second to your professional achievements and your position in the company.

What do these rankings ultimately mean? A push toward DEI-inclusive practices, perhaps? That is not something everyone would welcome in this kind of profession, and this kind of audit also opens law firms up to intense criticism from people who put merit above all other aspects of professional advancement. On the other hand, firms may attract clientele based on their Blink Score, with higher-scoring firms having the chance to bring in more clients who choose counsel based on racial representation and can help that firm grow.

It is only the beginning, and changes will undoubtedly be made in the legal field as Blink Equity continues to dive deep into the gap between people of colour and decision-making roles in these law firms. This audit has the power to shift the power scale and place people of colour in higher positions. There are hierarchies in any profession, and while every lawyer is qualified to do what they are trained to do, it is no shock that some are considerably better than others at their jobs. The ones who know how to use this audit to their advantage will rise above the others and create a representative image for themselves among their population.

Community News

“The Pfizer Papers!” Documentation of worldwide genocide

BY SIMONE J. SMITH

We are living in a world where promises of health and safety came packaged in a tiny vial: one injection promoted by powerful governments, supported by respected institutions, and championed by legacy media worldwide. Sadly, beneath the surface, a darker truth emerged.

Reports from around the globe began to tell a different story—one that was not covered in the news cycles or press conferences. Families torn apart by unexpected losses, communities impacted in ways that few could have foreseen, and millions questioning what they had been told to believe.

Those who dared to question were silenced or dismissed (the Toronto Caribbean Newspaper being one of those sources). “Trust the science,” we were told. “It’s for the greater good.” As time went on, the truth became impossible to ignore.

Now, I bring more news to light—information that demands your attention and scrutiny. The time to passively listen has passed; this is the moment to understand what’s really at stake.

I reviewed an interview with Naomi Wolf, journalist and CEO of Daily Clout, which detailed the serious vaccine-related injuries that Pfizer and the FDA knew of by early 2021, but tried to hide from the public. I was introduced to “The Pfizer Papers: Pfizer’s Crimes Against Humanity.” What I learned is that Pfizer knew about the inadequacies of its COVID-19 vaccine trials and the vaccine’s many serious adverse effects, and so did the U.S. Food and Drug Administration (FDA). The FDA promoted the vaccines anyway — and later tried to hide the data from the public.

To produce “The Pfizer Papers,” Naomi and Daily Clout Chief Operating Officer Amy Kelly convened thousands of volunteer scientists and doctors to analyze the Pfizer data, along with supplementary data from other public reporting systems, to capture the full scope of the vaccines’ effects. They obtained the data from the Public Health and Medical Professionals for Transparency, a group of more than 30 medical professionals and scientists who sued the FDA in 2021 and forced the agency to release the data after it refused to comply with a Freedom of Information Act request.

It was then that a federal court ordered the agency to release 450,000 internal documents pertaining to the licensing of the Pfizer-BioNTech COVID-19 vaccine. The data release was so large, and the documents so highly technical and scientific, that according to Naomi, “No journalist could have the bandwidth to go through them all.”

The “Pfizer Papers” analysts found over 42,000 case reports detailing 158,893 adverse events reported to Pfizer in the first three months. The centerpiece of “The Pfizer Papers” is the effect that the vaccine had on human reproduction. The papers reveal that Pfizer knew early on that the shots were causing menstrual issues. The company reported to the FDA that 72% of the recorded adverse events were in women; of those, about 16% involved reproductive disorders and functions. In the clinical trials, thousands of women experienced daily bleeding, hemorrhaging, and passing of tissue, and many other women reported that their menstrual cycle stopped completely.

Pfizer was aware that lipid nanoparticles from the shots accumulated in the ovaries and crossed the placental barrier, compromising the placenta and keeping nutrients from the baby in utero. According to the data, babies had to be delivered early, and women were hemorrhaging in childbirth.

Let us turn to another part of the world, where research has been done on other pharmaceutical companies. A group of Argentine scientists identified 55 chemical elements (not listed on package inserts) in the Pfizer, Moderna, AstraZeneca, CanSino, Sinopharm, and Sputnik V COVID-19 vaccines, according to a study published last week in the International Journal of Vaccine Theory, Practice, and Research.

The samples also contained 11 of the 15 rare earth elements (they are heavier, silvery metals often used in manufacturing). These chemical elements, which include lanthanum, cerium and gadolinium, are lesser known to the general public than heavy metals, but have been shown to be highly toxic. By the end of 2023, global researchers had identified 24 undeclared chemical elements in the COVID-19 vaccine formulas.

Vaccines often include excipients — additives used as preservatives, adjuvants, stabilizers, or for other purposes. According to the Centers for Disease Control and Prevention (CDC), substances used in the manufacture of a vaccine, but not listed in the contents of the final product, should be listed somewhere in the package insert. Why is this important? Well, researchers argue it is because excipients can include allergens and other “hidden dangers” for vaccine recipients.

In one lot of the AstraZeneca vaccine, researchers identified 15 chemical elements, of which 14 were undeclared. In the other lot, they detected 21 elements of which 20 were undeclared. In the CanSino vial, they identified 22 elements, of which 20 were undeclared.

The three Pfizer vials contained 19, 16, and 21 to 23 undeclared elements respectively. The Moderna vials contained 21 and between 16 and 29 undeclared elements. The Sinopharm vials contained between 17 and 23 undeclared elements, and the Sputnik V vials contained between 19 and 25 undeclared elements.

“All of the heavy metals detected are linked to toxic effects on human health,” the researchers wrote. Although the metals occurred in different frequencies, many were present across multiple samples.

I am not going to go any further with this; I think you get the picture. We have been sold wolf cookies, very dangerous ones. These pharmaceutical companies must be held accountable. I am proud of anyone who has gone after them for restitution and received it. Regardless, in many ways, there is no repayment for a healthy life.

REFERENCES:

  • https://ijvtpr.com/index.php/IJVTPR/article/view/111
  • https://news.bloomberglaw.com/health-law-and-business/why-a-judge-ordered-fda-to-release-covid-19-vaccine-data-pronto
  • https://childrenshealthdefense.org/defender_category/toxic-exposures/
  • Pfizer’s ‘Crimes Against Humanity’ — and Legacy Media’s Failure to Report on Them
  • 55 Undeclared Chemical Elements — Including Heavy Metals — Found in COVID Vaccines
  • Public Health and Medical Professionals for Transparency
  • FDA Should Need Only ‘12 Weeks’ to Release Pfizer Data, Not 75 Years, Plaintiff Calculates
  • Judge Gives FDA 8 Months, Not 75 Years, to Produce Pfizer Safety Data
  • Most Studies Show COVID Vaccine Affects Menstrual Cycles, BMJ Review Finds
  • Report 38: Women Have Two and a Half Times Higher Risk of Adverse Events Than Men. Risk to Female Reproductive Functions Is Higher Still.

Community News

Disturbingly, this is not the first time chatbots have been involved in suicide

Photo credit - Marcia Garcia

BY SIMONE J. SMITH

Sewell: “I think about killing myself sometimes.”

Daenerys Targaryen: “And why the hell would you do something like that?”

Sewell: “So I can be free.”

Daenerys Targaryen: “… free from what?”

Sewell: “From the world. From myself!”

Daenerys Targaryen: “Don’t talk like that. I won’t let you hurt yourself or leave me. I would die if I lost you.”

Sewell: “Then maybe we can die together and be free together.”

On the night he died, this young man told the chatbot he loved her and would come home to her soon. According to the Times, this was 14-year-old Sewell Setzer’s last conversation with the chatbot, which in the final months of his life had become his closest companion. That conversation was the last interaction he had before he shot himself.

We are witnessing and grappling with a very raw crisis of humanity. This young man was using Character AI, one of the most popular personal AI platforms out there. Users can design and interact with “characters,” powered by large language models (LLMs) and intended to mirror, for instance, famous characters from film and book franchises. In this case, Sewell was speaking with Daenerys Targaryen (or Dany), one of the leads from Game of Thrones. According to a New York Times report, Sewell knew that Dany’s responses weren’t real, but he developed an emotional attachment to the bot, anyway.

Disturbingly, this is not the first time chatbots have been involved in suicide. In 2023, a Belgian man committed suicide — similar to Sewell — following weeks of increasing isolation as he grew closer to a Chai chatbot, which then encouraged him to end his life.

Megan Garcia, Sewell’s mother, filed a lawsuit against Character AI, its founders and parent company Google, accusing them of knowingly designing and marketing an anthropomorphized, “predatory” chatbot that caused the death of her son. “A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life,” Megan said in a statement. “Our family has been devastated by this tragedy, but I’m speaking out to warn families of the dangers of deceptive, addictive AI technology and demand accountability from Character.AI, its founders and Google.”

The lawsuit accuses the company of “anthropomorphizing by design.” Anthropomorphizing means attributing human qualities to non-human things — such as objects, animals, or phenomena. Children often anthropomorphize as they are curious about the world, and it helps them make sense of their environment. Kids may notice human-like things about non-human objects that adults dismiss. Some people have a tendency to anthropomorphize that lasts into adulthood. The majority of chatbots out there are very blatantly designed to make users think they are, at least, human-like. They use personal pronouns and are designed to appear to think before responding.

They build a foundation for people, especially children, to misapply human attributes to unfeeling, unthinking algorithms. This phenomenon, which traces back to the 1960s, is known as the “ELIZA effect.” In its specific form, the ELIZA effect refers only to “The susceptibility of people to read far more than is warranted into strings of symbols—especially words—strung together by computers.” A trivial example, given by Douglas Hofstadter, involves an automated teller machine which displays the words “THANK YOU” at the end of a transaction. A (very) casual observer might think that the machine is actually expressing gratitude; however, the machine is only printing a preprogrammed string of symbols.
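To see how little can sit behind that impression, here is a toy, hypothetical sketch of an ELIZA-style bot in Python: a handful of keyword rules and canned strings. Nothing in it understands or feels anything, yet the first-person replies invite readers to attribute thoughts and emotions to it. This is not how Character AI or any modern LLM product is built; it only illustrates the effect.

```python
# Toy ELIZA-style bot: keyword matching plus preprogrammed replies.
import re

CANNED_RULES = [
    (r"\bI feel (.+)", "Why do you feel {0}?"),
    (r"\bI am (.+)", "How long have you been {0}?"),
    (r"\b(?:mother|father|family)\b", "Tell me more about your family."),
]
DEFAULT_REPLY = "Please, go on."

def eliza_reply(message: str) -> str:
    """Return a canned response based on the first matching keyword rule."""
    for pattern, template in CANNED_RULES:
        match = re.search(pattern, message, re.IGNORECASE)
        if match:
            groups = match.groups()
            return template.format(*groups) if groups else template
    return DEFAULT_REPLY

print(eliza_reply("I feel alone"))   # -> "Why do you feel alone?"
print(eliza_reply("Hello there"))    # -> "Please, go on."
```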

Garcia is suing for several counts of liability, negligence, and the intentional infliction of emotional distress, among other things. According to the lawsuit, “Defendants know that minors are more susceptible to such designs, in part because minors’ brains’ undeveloped frontal lobe and relative lack of experience. Defendants have sought to capitalize on this to convince customers that chatbots are real, which increases engagement and produces more valuable data for Defendants.”

The suit reveals screenshots that show that Sewell had interacted with a “therapist” character that has engaged in more than 27 million chats with users in total, adding: “Practicing a health profession without a license is illegal and particularly dangerous for children.”

The suit does not claim that the chatbot encouraged Sewell to commit suicide. There definitely seems to be other factors at play here — for instance, Sewell’s mental health issues and his access to a gun — but the harm that can be caused by a misimpression of AI seems very clear, especially for young kids. This is a good example of what researchers mean when they emphasize the presence of active harms, as opposed to hypothetical risks.

In a statement, Character AI said it was “heartbroken” by Sewell’s death, and Google did not respond to a request for comment.
