Community News

Say good-bye to critical thinking; Operation Jigsaw – info intervention initiative

Photo Credit: Werner Moser 

BY SIMONE J. SMITH

We are living in a time when critical thinking is no longer encouraged. It is not about searching for the truth or asking questions; instead, information is placed in front of us, and we are forced to accept what we are told without thought.

While media censors may have good intentions, they often fail to recognize that censorship itself can be a dangerous practice. Alternative media sites provide the opportunity to exercise the critical thinking skills that are needed to flourish as mindful citizens. Without being exposed to opposing information representing a range of beliefs and ideas, media consumers lose the opportunity to evaluate ideas that may contradict what they believe or have been taught to believe. These evaluative skills are useful in a democratic society, where we have to deal with controversial ideas in the media, at work, in government and internationally.

What we are trying to avoid is life as it is known in places like North Korea and other heavily censored countries, where citizens have no access to information from outside their country and are forbidden to criticize the government. Many people in these countries live isolated, sheltered lives, ignorant of alternative ways of life and thinking.

What is scary is that this societal make-up is slowly making its way into North America, a continent that is supposed to be known for its democratic system of government. Interventions are being developed that will slowly bring us to a place where we are shut off from the rest of the world and forced to ingest whatever is given to us.

There is a new info intervention initiative led by one of Google’s subsidiaries, a company called Jigsaw. Jigsaw operates under the complete management of Google, and its mission is to apply technological solutions to challenges ranging from countering extremism and online censorship to defending against cyber-attacks and protecting access to information.

Their official website touts this new initiative as a set of approaches, informed by behavioural science research and validated by digital experiments, to build resilience to harm online. In practice, the initiative allows Google to use essentially the same methodology on its users that Pavlov used on dogs in his experiments. Instead of trying to make dogs salivate, Google is trying to condition its users to question anything that goes against what its fact-checkers deem to be disinformation.

According to their website, Jigsaw.com, as the tactics of disinformation campaigns become more sophisticated, they are building new technology to strengthen our collective defences. Their research includes:

Project Assembler

Assembler was an experiment conducted by Jigsaw and Google Research that explored how new detection technology could help fact-checkers and journalists identify manipulated media. The experiment is now closed.

Contributing Data to Deepfake Detection Research

So-called “deepfakes,” produced by deep generative models that can manipulate video and audio clips, are one of these sophisticated tactics. Since their first appearance in late 2017, many open-source deepfake generation methods have emerged, leading to a growing number of synthesized media clips. While many are likely intended to be humorous, others could be harmful to individuals and society.

Interactive Visualizer

Coordinated disinformation campaigns are more likely to thrive when they go unnoticed and unchecked. This interactive visualizer breaks down the methods, targets, and origins of select coordinated disinformation campaigns throughout the world.

Jigsaw has four methods that are supposed to manage misinformation on the internet: accuracy prompts, the Redirect Method, authorship feedback, and pre-bunking. We are going to look at each of these in turn:

Accuracy Prompts

Previous research has shown that most social media users are fairly adept at spotting falsehoods when asked to judge accuracy. That does not always stop them from spreading misinformation, because they simply forget to think about accuracy when deciding what to share. Accuracy prompts are designed to slow the spread of false news by helping people stop and reflect on the accuracy of what they are seeing before they click ‘share.’
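
To make the mechanism concrete, here is a minimal sketch in Python of how a platform might interpose an accuracy prompt in the share flow. Everything here, from the function names to the wording of the prompt, is an illustrative assumption rather than Jigsaw’s actual code:

# Hypothetical sketch: pause the share flow and ask the user to reflect
# on accuracy before the post is actually sent.
def share_with_accuracy_prompt(post_text, ask):
    # 'ask' is any yes/no prompt; in a real app this would be a dialog box.
    return ask(
        "Before you share, take a second: how accurate is this post?\n"
        '"' + post_text + '"\n'
        "Share anyway? (y/n): "
    )

def console_ask(prompt):
    # Console stand-in for the platform's share dialog.
    return input(prompt).strip().lower().startswith("y")

if __name__ == "__main__":
    if share_with_accuracy_prompt("Scientists say chocolate cures colds.", console_ask):
        print("Post shared.")
    else:
        print("Share cancelled.")

The point of the design is the pause itself: the prompt does not block the share, it only forces a moment of reflection before the click goes through.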

Redirect Method

The Redirect Method identifies keywords and patterns of online activity that suggest a person may be on a path towards extremism, and redirects that person towards curated content intended to counter extremist narratives.
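
As an illustration only, the core routing logic might look like the minimal Python sketch below. The keyword list, the URLs and the matching rule are all placeholder assumptions; the real method relies on curated keyword research rather than a hard-coded list:

# Hypothetical sketch of keyword-based redirection: flag queries that match
# a watchlist and steer them to curated counter-narrative material.
REDIRECT_KEYWORDS = {"example extremist slogan", "example recruitment phrase"}
COUNTER_CONTENT_URL = "https://example.org/counter-narratives"  # placeholder

def route_query(query):
    normalized = query.lower()
    if any(keyword in normalized for keyword in REDIRECT_KEYWORDS):
        return COUNTER_CONTENT_URL  # redirect to curated material
    return "https://example.org/search?q=" + query  # normal search path

print(route_query("example recruitment phrase near me"))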

Authorship Feedback

The idea is simple: Perspective’s machine learning models spot potentially toxic contributions, and the platform sends a signal to the author right as they are writing the comment. By suggesting that the comment might violate community guidelines, it gives the author a few extra seconds to consider adjusting their language.
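
Jigsaw exposes these models through its public Perspective API. The Python sketch below shows the general shape of a pre-submission toxicity check using only the standard library; the 0.8 threshold and the nudge text are illustrative assumptions, and you would need your own API key:

import json
import urllib.request

# Perspective API endpoint; replace YOUR_API_KEY with a real key.
API_URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
           "comments:analyze?key=YOUR_API_KEY")

def toxicity_score(comment_text):
    # Ask Perspective to score the comment for the TOXICITY attribute.
    payload = {
        "comment": {"text": comment_text},
        "requestedAttributes": {"TOXICITY": {}},
    }
    request = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        result = json.load(response)
    return result["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

def authorship_feedback(comment_text, threshold=0.8):
    # Nudge the author if the model thinks the comment is likely toxic.
    if toxicity_score(comment_text) >= threshold:
        return "Your comment may violate community guidelines. Consider rewording it."
    return None  # no nudge needed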

Pre-bunking

This involves debunking a line of disinformation by publishing an account of that disinformation along with a simultaneous refutation before the disinformation itself is actually disseminated by its author. Pre-bunking is an anticipatory form of rumour control.

Let’s keep it real: this is about controlling what you can say and see online. Things are changing, and it doesn’t look like they will get better.
