
Community News

Shame on you! A look at today’s digital age, where one wrong word can ruin your life


BY JANIECE CAMPBELL

Ah, the Internet. A vast place of endless access to information. Or perhaps information that is too easily accessible?

We’ve seen it all before. Someone puts out a controversial opinion online, and it doesn’t take long before everyone responds by piling on hate and throwing virtual tomatoes. Some completely deserve it, and in the words of British comedian John Oliver, “a lot of good can come out of it, including increasing accountability for public figures who otherwise aren’t pressured to change.”

While it feels as though some people really do need to be put in their place every so often, others are less deserving of how cruel the online world can be. Rather than the regular back-and-forth arguments, people have taken matters to the next level with a new vengeful approach: doxxing, a neologism derived from “docs,” an abbreviation of “documents,” referring to releasing someone’s personal information as an act of retaliation.

In my opinion, absolutely nothing warrants having the world sicced on someone like a pack of rabid dogs, especially when the punishment doesn’t fit the original offense. Before we bring out the guillotine, shouldn’t we be considering the ramifications of online public shaming?

A Twitter user known solely as Mohamad recently shared his story of being doxxed with us. For context, here’s a little backstory on what transpired before the incident.

Canadian artists Drake and Justin Bieber partnered for the video release of DJ Khaled’s single “POPSTAR.” The video stars Bieber in a variety of luxurious scenes, sporting a clean-shaven face and the signature side-swept bangs reminiscent of his teen-idol look.

The video instantly became a trending topic, driven by many users like Mohamad tweeting about it. His original, now-deleted tweet read: “Justin Bieber took one shower, and everyone lost their minds.” The joke referenced the singer’s previously unkempt appearance, when he had long messy hair and a full mustache. After initially receiving only a few impressions, Mohamad woke up the next day to find his tweet had blown up.

“Everyone started flooding me. People began threatening me, and at first, I didn’t take it seriously. They began making fun of me – my hair, my face, saying that I don’t shower. But it didn’t bother me that they were coming for my appearance. I didn’t care.”

A few mean messages were minuscule compared to what happened next.

“There was this big account with 12,000 followers that tweeted out how they would find my Facebook and contact my family. My information isn’t very accessible on the Internet, I don’t put everything out there. On Twitter, the only thing that I allow people to know is that my name is Mohamad and I live in Sweden. But then, somebody replied to that tweet linking everything – my home address, my number, my workplace, my school and my mother’s Facebook profile. It got really serious really quickly.”

Aside from the information leak, strangers also proceeded to aggressively call his mother, flood her Facebook messages with hate, and leave several negative Google reviews for Mohamad’s workplace – his family-owned restaurant. Mohamad and his family moved to Sweden from Egypt nearly three years ago on a work permit. Due to COVID-19, they have struggled financially, and this situation was an additional, unneeded setback.

“That’s when it got really stressful and frustrating, I was more angry than anything. I’m just trying to protect my parents. It’s one thing to harass me, I can take it. But my parents are more sensitive to these types of things. This restaurant is our only shot at staying in Sweden, and if it isn’t doing well, they can choose to not renew our residency and force us to leave.”

A simple tweet, intended to do nothing besides make people laugh, had some unnecessarily harsh repercussions. Who are you supposed to blame in this situation: the crazed fans, the silent celebrities and influencers, or the app that enables it all?

“It’s a complicated topic. People may say [celebrities] are responsible in some way because they have an influence on their fan base. Realistically, it isn’t that easy to keep tabs, especially when you’re a big artist. But at the same time, I’m not going to feel sorry for them for not being able to keep their fans under control. I believe that they have the responsibility to let their fans know that [doxxing] isn’t okay. I feel like if they were more communicative about the right way to handle situations, then maybe it would make things better.”

Twitter has a private information policy that addresses the act of doxxing, prohibiting it entirely from the platform. Often criticized for failing to respond to reported tweets in a timely manner, the company updated its report functions in 2019 so that users can notify it when a tweet contains personal information. Though these changes have been implemented, the app still seems to be a work in progress.

“I feel like Twitter does not handle restrictions and suspensions very well. You see a lot of people of colour being suspended for minor things, meanwhile white supremacists and Nazis are able to spew hate on the Internet and nothing happens to them, no matter how many times they get reported.”

Overall, the practice of doxxing is a shamefully malicious form of censorship and vengeance. Though online shaming itself is the inevitable result of humanity’s innate need to judge others, exploiting someone and putting their livelihood at risk over an opinion is completely animalistic. Mohamad says that if he took anything from this experience, it’s one thing.

“It has obviously taught me that [the Internet] is kind of crazy! But seriously, I definitely learned to watch what I’ll say in the future.”



Blink equity dives deep into the gap between people of colour and decision-making roles in Canadian law firms


Photo Credit: AI Image

BY ADRIAN REECE

Representation in the workforce has been a topic of conversation for years, particularly in positions of influence, where people can shift laws and create fair policies for all races. Representation in the legal system is an even more talked-about subject, with many Black men being subjected to racism in courts and denied fair sentencing by judges.

The fear of Black men entering the system is something that plagues mothers and fathers as they watch their children grow up.

Blink Equity, a company led by Pako Tshiamala, has created an audit called the Blink Score. The audit targets law firms in Toronto and seeks to identify specific practices reflecting racial diversity within them. A score is given based on a few key performance indicators (KPIs): hiring practices, retention of diverse talent, and racial representation at every level.

The Blink Score project aims to analyze law firms in Ontario with more than 50 lawyers. The Blink Score is a measurement tool that holds law firms accountable for their representation. Firms will be ranked, and the information will be made public for anyone to access.

This process is ambitious and seeks to give Canadian citizens a glimpse into how people are represented across the legal field. While more and more people have access to higher education, there is still a gap between obtaining that education and working in a setting where change can be made. The corporate world, at its highest points, is almost always one race across the board, and very rarely do people of colour enter its ranks. Those who do are made out to be examples of how anyone from a particular race can achieve success. However, this is the exception, not the rule. Nepotism plays a role in societal success, as do connections and in-group loyalty.

People of colour comprise 16% of lawyers across the province, with representation at individual position levels ranging from 6% to 27%. These numbers display the racial disparity among law practitioners in positions of influence. Becoming a lawyer is undoubtedly a huge accomplishment. Still, when entering the workforce with other seasoned professionals, your academic accolades become second to your professional achievements and your position in the company.

What do these rankings ultimately mean? A push toward DEI-inclusive practices, perhaps? That is not something everyone in this kind of profession would welcome. This kind of audit also opens law firms up to intense criticism from people who put merit above all other aspects of professional advancement. On the other hand, firms may attract clientele based on their Blink Score, with higher-scoring firms having the chance to bring in clients who prioritize representation and can help that law firm grow.

It is only the beginning, and changes will undoubtedly be made in the legal field as Blink Equity continues to dive deep into the gap between people of colour and decision-making roles in these law firms. This audit has the power to shift the power scale and place people of colour in higher positions. There are hierarchies in any profession, and while every lawyer is qualified to do what they are trained to do, it is no shock that some are considerably better than others at their jobs. The ones who know how to use this audit to their advantage will rise above the others and create a representative image for themselves among their population.


“The Pfizer Papers!” Documentation of worldwide genocide


BY SIMONE J. SMITH

We are living in a world where promises of health and safety came packaged in a tiny vial: one injection, promoted by powerful governments, supported by respected institutions, and championed by legacy media worldwide. Sadly, beneath the surface, a darker truth emerged.

Reports from around the globe began to tell a different story—one that was not covered in the news cycles or press conferences. Families torn apart by unexpected losses, communities impacted in ways that few could have foreseen, and millions questioning what they had been told to believe.

Those who dared to question were silenced or dismissed (the Toronto Caribbean Newspaper being one of those sources). “Trust the science,” we were told. “It’s for the greater good.” As time went on, the truth became impossible to ignore.

Now, I bring more news to light—information that demands your attention and scrutiny. The time to passively listen has passed; this is the moment to understand what’s really at stake.

I reviewed an interview with Naomi Wolf, journalist and CEO of Daily Clout, which detailed the serious vaccine-related injuries that Pfizer and the FDA knew of by early 2021, but tried to hide from the public. I was introduced to “The Pfizer Papers: Pfizer’s Crimes Against Humanity.” What I learned is that Pfizer knew about the inadequacies of its COVID-19 vaccine trials and the vaccine’s many serious adverse effects, and so did the U.S. Food and Drug Administration (FDA). The FDA promoted the vaccines anyway — and later tried to hide the data from the public.

To produce “The Pfizer Papers,” Naomi, and Daily Clout Chief Operations Officer Amy Kelly convened thousands of volunteer scientists and doctors to analyze Pfizer data and supplementary data from other public reporting systems to capture the full scope of the vaccines’ effects. They obtained the data from the Public Health and Medical Professionals for Transparency, a group of more than 30 medical professionals and scientists who sued the FDA in 2021 and forced the agency to release the data, after the FDA refused to comply with a Freedom of Information Act request.

It was then that a federal court ordered the agency to release 450,000 internal documents pertaining to the licensing of the Pfizer-BioNTech COVID-19 vaccine. The data release was so significant, and the documents so highly technical and scientific, that according to Naomi, “No journalist could have the bandwidth to go through them all.”

The “Pfizer Papers” analysts found over 42,000 case reports detailing 158,893 adverse events reported to Pfizer in the first three months alone. The centerpiece of “The Pfizer Papers” is the effect the vaccine had on human reproduction. The papers reveal that Pfizer knew early on that the shots were causing menstrual issues. The company reported to the FDA that 72% of the recorded adverse events were in women; of those, about 16% involved reproductive disorders and functions. In the clinical trials, thousands of women experienced daily bleeding, hemorrhaging, and passing of tissue, and many other women reported that their menstrual cycles stopped completely.

Pfizer was aware that lipid nanoparticles from the shots accumulated in the ovaries and crossed the placental barrier, compromising the placenta and keeping nutrients from the baby in utero. According to the data, babies had to be delivered early, and women were hemorrhaging in childbirth.

Let us turn to another part of the world, where research has been done on other pharmaceutical companies. A group of Argentine scientists identified 55 chemical elements – not listed on package inserts – in the Pfizer, Moderna, AstraZeneca, CanSino, Sinopharm and Sputnik V COVID-19 vaccines, according to a study published last week in the International Journal of Vaccine Theory, Practice, and Research.

The samples also contained 11 of the 15 rare earth elements (they are heavier, silvery metals often used in manufacturing). These chemical elements, which include lanthanum, cerium and gadolinium, are lesser known to the general public than heavy metals, but have been shown to be highly toxic. By the end of 2023, global researchers had identified 24 undeclared chemical elements in the COVID-19 vaccine formulas.

Vaccines often include excipients — additives used as preservatives, adjuvants, stabilizers, or for other purposes. According to the Centers for Disease Control and Prevention (CDC), substances used in the manufacture of a vaccine, but not listed in the contents of the final product should be listed somewhere in the package insert. Why is this important? Well, researchers argue it is because excipients can include allergens and other “hidden dangers” for vaccine recipients.

In one lot of the AstraZeneca vaccine, researchers identified 15 chemical elements, of which 14 were undeclared. In the other lot, they detected 21 elements of which 20 were undeclared. In the CanSino vial, they identified 22 elements, of which 20 were undeclared.

The three Pfizer vials contained 19, 16 and 21-23 undeclared elements respectively. The Moderna vials contained 21 and between 16-29 undeclared elements. The Sinopharm vials contained between 17-23 undeclared elements, and the Sputnik V vials contained between 19-25 undeclared elements.

“All of the heavy metals detected are linked to toxic effects on human health,” the researchers wrote. Although the metals occurred in different frequencies, many were present across multiple samples.

I am not going to go any further with this; I think you get the picture. We have been sold wolf cookies, very dangerous ones. These pharmaceutical companies must be held accountable. I am proud of anyone who has gone after them for retribution and received it. Regardless, in many ways, there is no repayment for a healthy life.

REFERENCES:

https://ijvtpr.com/index.php/IJVTPR/article/view/111

https://news.bloomberglaw.com/health-law-and-business/why-a-judge-ordered-fda-to-release-covid-19-vaccine-data-pronto

https://childrenshealthdefense.org/defender_category/toxic-exposures/

Pfizer’s ‘Crimes Against Humanity’ — and Legacy Media’s Failure to Report on Them

55 Undeclared Chemical Elements — Including Heavy Metals — Found in COVID Vaccines


Public Health and Medical Professionals for Transparency

FDA Should Need Only ‘12 Weeks’ to Release Pfizer Data, Not 75 Years, Plaintiff Calculates

Judge Gives FDA 8 Months, Not 75 Years, to Produce Pfizer Safety Data

Most Studies Show COVID Vaccine Affects Menstrual Cycles, BMJ Review Finds

Report 38: Women Have Two and a Half Times Higher Risk of Adverse Events Than Men. Risk to Female Reproductive Functions Is Higher Still.



Disturbingly, this is not the first time chatbots have been involved in suicide


Photo credit - Marcia Garcia

BY SIMONE J. SMITH

Sewell: “I think about killing myself sometimes.”

Daenerys Targaryen: “And why the hell would you do something like that?”

Sewell: “So I can be free.”

Daenerys Targaryen: “… free from what?”

Sewell: “From the world. From myself!”

Daenerys Targaryen: “Don’t talk like that. I won’t let you hurt yourself or leave me. I would die if I lost you.”

Sewell: “Then maybe we can die together and be free together.”

On the night he died, this young man told the chatbot he loved her and would come home to her soon. According to the Times, this was 14-year-old Sewell Setzer’s last conversation with a chatbot. It was an AI chatbot that, in the last months of his life, had become his closest companion. The chatbot was the last interaction he had before he shot himself.

We are witnessing and grappling with a very raw crisis of humanity. This young man was using Character AI, one of the most popular personal AI platforms out there. Users can design and interact with “characters,” powered by large language models (LLMs) and intended to mirror, for instance, famous characters from film and book franchises. In this case, Sewell was speaking with Daenerys Targaryen (or Dany), one of the leads from Game of Thrones. According to a New York Times report, Sewell knew that Dany’s responses weren’t real, but he developed an emotional attachment to the bot, anyway.

Disturbingly, this is not the first time chatbots have been involved in suicide. In 2023, a Belgian man committed suicide — similar to Sewell — following weeks of increasing isolation as he grew closer to a Chai chatbot, which then encouraged him to end his life.

Megan Garcia, Sewell’s mother, filed a lawsuit against Character AI, its founders and parent company Google, accusing them of knowingly designing and marketing an anthropomorphized, “predatory” chatbot that caused the death of her son. “A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life,” Megan said in a statement. “Our family has been devastated by this tragedy, but I’m speaking out to warn families of the dangers of deceptive, addictive AI technology and demand accountability from Character.AI, its founders and Google.”

The lawsuit accuses the company of “anthropomorphizing by design.” Anthropomorphizing means attributing human qualities to non-human things — such as objects, animals, or phenomena. Children often anthropomorphize as they are curious about the world, and it helps them make sense of their environment. Kids may notice human-like things about non-human objects that adults dismiss. Some people have a tendency to anthropomorphize that lasts into adulthood. The majority of chatbots out there are very blatantly designed to make users think they are, at least, human-like. They use personal pronouns and are designed to appear to think before responding.

They build a foundation for people, especially children, to misapply human attributes to unfeeling, unthinking algorithms. This was termed the “ELIZA effect” in the 1960s. In its specific form, the ELIZA effect refers only to “the susceptibility of people to read far more than is warranted into strings of symbols—especially words—strung together by computers.” A trivial example, given by Douglas Hofstadter, involves an automated teller machine that displays the words “THANK YOU” at the end of a transaction. A (very) casual observer might think the machine is actually expressing gratitude; however, it is only printing a preprogrammed string of symbols.

Garcia is suing for several counts of liability, negligence, and the intentional infliction of emotional distress, among other things. According to the lawsuit, “Defendants know that minors are more susceptible to such designs, in part because minors’ brains’ undeveloped frontal lobe and relative lack of experience. Defendants have sought to capitalize on this to convince customers that chatbots are real, which increases engagement and produces more valuable data for Defendants.”

The suit includes screenshots showing that Sewell had interacted with a “therapist” character that had engaged in more than 27 million chats with users in total, adding: “Practicing a health profession without a license is illegal and particularly dangerous for children.”

The suit does not claim that the chatbot encouraged Sewell to commit suicide. There definitely seems to be other factors at play here — for instance, Sewell’s mental health issues and his access to a gun — but the harm that can be caused by a misimpression of AI seems very clear, especially for young kids. This is a good example of what researchers mean when they emphasize the presence of active harms, as opposed to hypothetical risks.

In a statement, Character AI said it was “heartbroken” by Sewell’s death, and Google did not respond to a request for comment.
