Community News

Artificial Intelligence to help avert blindness

BY W. GIFFORD-JONES MD & DIANA GIFFORD-JONES

How can doctors diagnose and treat 425 million diabetes patients worldwide? That number keeps climbing, projected to reach 700 million by 2045. There are millions more with undiagnosed prediabetes. Add millions with undiagnosed hypertension. All these people are destined for lives defined by cardiovascular problems and complications that include debilitating conditions like blindness.

Diabetes is swamping healthcare systems worldwide. Let us be clear: whatever we have been doing to fight the problem, it is not working.

Artificial Intelligence (AI) is offering new possibilities. Using new technologies, data science, vast quantities of medical images, and computer algorithms, it is possible to fight diseases differently. The traditional medical model of one patient and one doctor is outdated. We need to put AI on our healthcare team, using analytical methods to predict problems before they occur and to help doctors and patients make better decisions.

Computer-assisted retinal analysis (CARA) is one such technology. Developed by DIAGNOS, a Montreal-based company, CARA uses retina scans to detect early warning signs of major health problems. And CARA can do it on a scale that will make a real difference in fighting the diabetes epidemic.

The retina, the back part of the eye, is the only area of the body where doctors can easily see the condition of arteries and veins without invasive procedures. Early detection of atherosclerosis (hardening of arteries) in the retinas of diabetes patients signals a warning that the same problem is occurring in coronary arteries. This is why the retina is called "the window to the heart."

Prevention is always better than cure. But this is easier said than done in many parts of the world where highly trained retinal specialists are in short supply. We are more fortunate in North America, but retinal checkups are mainly the purview of ophthalmologists focused on your eyes, not your cardiovascular system.

Type 2 diabetes has become a worldwide epidemic and an expensive problem for every health care system. Type 2 diabetes is not a singular disease. Rather, by triggering atherosclerosis, it decreases blood supply to many parts of the body with catastrophic results. For example, long-standing diabetes increases the risk of blindness, heart attack, and kidney failure, which may require renal dialysis or a kidney transplant.

Doctors can only treat so many patients, so this problem is an example of where we can leverage technology to screen millions of people. CARA can scan an eye in two seconds. Furthermore, it can scan hundreds of patients for hours without getting tired or making errors. We need to use AI to detect retinal changes and prevent diabetes complications – averting countless cases of blindness and other problems, improving lives, and saving dollars.

Andre Larente, president of DIAGNOS, recently remarked, “CARA can now look at a patient’s retina, discover the presence of hypertension and predict a chance of stroke in 12 to 24 months.” Given that CARA can do this across very large populations of patients, at low cost, it’s easy to see the appeal of this technology from a health care and economic perspective, not to mention the incentive to individual patients to reduce their risk profile.

There’s no doubt that the capacities of artificial intelligence are changing the way we can fight illness, and companies like DIAGNOS are important partners in medical practice. The key is in scaling up. CARA has accumulated a vast database of retinal photos of patients worldwide. This data can be used for predictive modeling. So the next step will be getting this data into the hands of those who can take steps to stop the progression of illness, change conditions leading to disease, and prevent these avoidable health problems in the first place.

Dr. W. Gifford-Jones, MD is a graduate of the University of Toronto and the Harvard Medical School. He trained in general surgery at Strong Memorial Hospital, University of Rochester, Montreal General Hospital, McGill University and in gynecology at Harvard. His storied medical career began as a general practitioner, ship’s surgeon, and hotel doctor. For more than 40 years, he specialized in gynecology, devoting his practice to the formative issues of women’s health. In 1975, he launched his weekly medical column, which has been published by national and local Canadian and U.S. newspapers. Today, the readership remains over seven million. His advice contains a solid dose of common sense, and he never sits on the fence with controversial issues. He is the author of nine books, including “The Healthy Barmaid”, his autobiography “You’re Going To Do What?”, “What I Learned as a Medical Journalist”, and “90+ How I Got There!” Many years ago, he was successful in a fight to legalize heroin to help ease the pain of terminal cancer patients. His foundation at that time donated $500,000 to establish the Gifford-Jones Professorship in Pain Control and Palliative Care at the University of Toronto Medical School. At 93 years of age, he rappelled from the top of Toronto’s City Hall (30 stories) to raise funds for children with a life-threatening disease through the Make-a-Wish Foundation.

Diana Gifford-Jones, the daughter of W. Gifford-Jones, MD, has extensive global experience in health and healthcare policy. Diana is Special Advisor with The Aga Khan University, which operates two quaternary care hospitals and numerous secondary hospitals, medical centres, pharmacies, and laboratories in South Asia and Africa. She worked for ten years in the Human Development sectors at the World Bank, including health policy and economics, nutrition, and population health. For over a decade at The Conference Board of Canada, she managed four health-related executive networks, including the Roundtable on Socio-Economic Determinants of Health, the Centre for Chronic Disease Prevention and Management, the Canadian Centre for Environmental Health, and the Centre for Health System Design and Management. Her master’s degree in public policy at Harvard University’s Kennedy School of Government included coursework at Harvard Medical School. She is also a graduate of Wellesley College. She has extensive experience with Canadian universities, including at Carleton University, where she was the Executive Director of the Global Academy. She lived and worked in Japan for four years and speaks Japanese fluently. Diana holds the designation of certified Chartered Director from The Directors College, a joint venture of The Conference Board of Canada and McMaster University. She has recently published a book on the natural health philosophy of W. Gifford-Jones, called No Nonsense Health – Naturally!


Blink equity dives deep into the gap between people of colour and decision-making roles in Canadian law firms

Photo Credit: AI Image

BY ADRIAN REECE

Representation in the workforce has been a topic of conversation for years, particularly in positions of influence, where people can shift laws and create fair policies for all races. Representation in the legal system is an even more talked-about subject, with many Black men subjected to racism in courts and denied fair sentencing by judges.

The fear of Black men entering the system is something that plagues mothers and fathers as they watch their children grow up.

Blink Equity, a company led by Pako Tshiamala, has created an audit called the Blink Score. The audit targets law firms in Toronto, seeking to identify specific practices that reflect racial diversity. A score is given based on a few key performance indicators (KPIs): hiring practices, retention of diverse talent, and racial representation at every level.

The Blink Score project aims to analyze law firms in Ontario with more than 50 lawyers. The Blink Score is a measurement tool that holds law firms accountable for their representation. Firms will be ranked, and the information will be made public for anyone to access.

This process is ambitious and seeks to give Canadian citizens a glimpse into how many people are represented across the legal field. While more and more people have access to higher education, there is still a gap between obtaining that education and working in a setting where change can be made. The corporate world, at its highest points, is almost always one race across the board, and very rarely do people of colour enter its ranks. Those who do are made out to be examples of how anyone from a particular race can achieve success. However, they are the exception, not the rule. Nepotism plays a role in societal success, connections are a factor, and so is loyalty to one’s own race, even among mere acquaintances.

People of colour comprise 16% of all lawyers across the province, and their representation at individual position levels ranges from 6% to 27%. These numbers display the racial disparity among law practitioners in positions of influence. Becoming a lawyer is undoubtedly a huge accomplishment. Still, when entering the workforce alongside seasoned professionals, your academic accolades become second to your professional achievements and your position in the company.

What do these rankings ultimately mean? A push toward DEI-inclusive practices, perhaps? Some would argue that is not something anyone would want in this kind of profession, and this kind of audit also opens law firms up to intense criticism from people who put merit above all other aspects of professional advancement. On the other hand, firms may attract clientele based on their Blink Score, with higher-scoring firms having the chance to bring in more race-conscious clients who can help them grow.

It is only the beginning, and changes will undoubtedly be made in the legal field as Blink Equity continues to dive deep into the gap between people of colour and decision-making roles in these law firms. This audit has the power to shift the power scale and place people of colour in higher positions. There are hierarchies in any profession, and while every lawyer is qualified to do what they are trained to do, it is no shock that some are considerably better than others at their jobs. The ones who know how to use this audit to their advantage will rise above the others and create a representative image for themselves among their population.

“The Pfizer Papers!” Documentation of worldwide genocide

BY SIMONE J. SMITH

We are living in a world where promises of health and safety came packaged in a tiny vial: one injection promoted by powerful governments, supported by respected institutions, and championed by legacy media worldwide. Sadly, beneath the surface, a darker truth emerged.

Reports from around the globe began to tell a different story—one that was not covered in the news cycles or press conferences. Families torn apart by unexpected losses, communities impacted in ways that few could have foreseen, and millions questioning what they had been told to believe.

Those who dared to question were silenced or dismissed (the Toronto Caribbean Newspaper being one of those sources). “Trust the science,” we were told. “It’s for the greater good.” As time went on, the truth became impossible to ignore.

Now, I bring more news to light—information that demands your attention and scrutiny. The time to passively listen has passed; this is the moment to understand what’s really at stake.

I reviewed an interview with Naomi Wolf, journalist and CEO of Daily Clout, which detailed the serious vaccine-related injuries that Pfizer and the FDA knew of by early 2021, but tried to hide from the public. I was introduced to “The Pfizer Papers: Pfizer’s Crimes Against Humanity.” What I learned is that Pfizer knew about the inadequacies of its COVID-19 vaccine trials and the vaccine’s many serious adverse effects, and so did the U.S. Food and Drug Administration (FDA). The FDA promoted the vaccines anyway — and later tried to hide the data from the public.

To produce “The Pfizer Papers,” Naomi Wolf and Daily Clout Chief Operations Officer Amy Kelly convened thousands of volunteer scientists and doctors to analyze Pfizer data and supplementary data from other public reporting systems to capture the full scope of the vaccines’ effects. They obtained the data from the Public Health and Medical Professionals for Transparency, a group of more than 30 medical professionals and scientists who sued the FDA in 2021 and forced the agency to release the data, after the FDA refused to comply with a Freedom of Information Act request.

It was then that the federal court ordered the agency to release 450,000 internal documents pertaining to the licensing of the Pfizer-BioNTech COVID-19 vaccine. The data release was so significant and the documents so highly technical that, according to Naomi, “No journalist could have the bandwidth to go through them all.”

The “Pfizer Papers” analysts found over 42,000 case reports detailing 158,893 adverse events reported to Pfizer in the first three months. The centerpiece of “The Pfizer Papers” is the effect that the vaccine had on human reproduction. The papers reveal that Pfizer knew early on that the shots were causing menstrual issues. The company reported to the FDA that 72% of the recorded adverse events were in women. Of those, about 16% involved reproductive disorders and functions. In the clinical trials, thousands of women experienced daily bleeding, hemorrhaging, and passing of tissue, and many other women reported that their menstrual cycle stopped completely.

Pfizer was aware that lipid nanoparticles from the shots accumulated in the ovaries and crossed the placental barrier, compromising the placenta and keeping nutrients from the baby in utero. According to the data, babies had to be delivered early, and women were hemorrhaging in childbirth.

Let us turn to another part of the world, where research has been done on other pharmaceutical companies. A group of Argentine scientists identified 55 chemical elements — not listed on package inserts — in the Pfizer, Moderna, AstraZeneca, CanSino, Sinopharm and Sputnik V COVID-19 vaccines, according to a study published in the International Journal of Vaccine Theory, Practice, and Research.

The samples also contained 11 of the 15 rare earth elements (heavier, silvery metals often used in manufacturing). These chemical elements, which include lanthanum, cerium and gadolinium, are less known to the general public than heavy metals, but have been shown to be highly toxic. By the end of 2023, global researchers had identified 24 undeclared chemical elements in COVID-19 vaccine formulas.

Vaccines often include excipients — additives used as preservatives, adjuvants, stabilizers, or for other purposes. According to the Centers for Disease Control and Prevention (CDC), substances used in the manufacture of a vaccine but not listed in the contents of the final product should be listed somewhere in the package insert. Why is this important? Researchers argue it is because excipients can include allergens and other “hidden dangers” for vaccine recipients.

In one lot of the AstraZeneca vaccine, researchers identified 15 chemical elements, of which 14 were undeclared. In the other lot, they detected 21 elements of which 20 were undeclared. In the CanSino vial, they identified 22 elements, of which 20 were undeclared.

The three Pfizer vials contained 19, 16, and 21 to 23 undeclared elements, respectively. The Moderna vials contained 21 and between 16 and 29 undeclared elements. The Sinopharm vials contained between 17 and 23 undeclared elements, and the Sputnik V vials contained between 19 and 25 undeclared elements.

“All of the heavy metals detected are linked to toxic effects on human health,” the researchers wrote. Although the metals occurred in different frequencies, many were present across multiple samples.

I am not going to go any further with this; I think you get the picture. We have been sold wolf cookies, very dangerous ones. These pharmaceutical companies must be held accountable. I am proud of anyone who has gone after them for retribution and has received it. Regardless, in many ways, there is no repayment for a healthy life.

REFERENCES:

https://ijvtpr.com/index.php/IJVTPR/article/view/111

https://news.bloomberglaw.com/health-law-and-business/why-a-judge-ordered-fda-to-release-covid-19-vaccine-data-pronto

https://childrenshealthdefense.org/defender_category/toxic-exposures/

Pfizer’s ‘Crimes Against Humanity’ — and Legacy Media’s Failure to Report on Them

55 Undeclared Chemical Elements — Including Heavy Metals — Found in COVID Vaccines

 

Public Health and Medical Professionals for Transparency

FDA Should Need Only ‘12 Weeks’ to Release Pfizer Data, Not 75 Years, Plaintiff Calculates

Judge Gives FDA 8 Months, Not 75 Years, to Produce Pfizer Safety Data

Most Studies Show COVID Vaccine Affects Menstrual Cycles, BMJ Review Finds

Report 38: Women Have Two and a Half Times Higher Risk of Adverse Events Than Men. Risk to Female Reproductive Functions Is Higher Still.

 

Disturbingly, this is not the first time chatbots have been involved in suicide

Photo Credit: Marcia Garcia

BY SIMONE J. SMITH

Sewell: “I think about killing myself sometimes.”

Daenerys Targaryen: “And why the hell would you do something like that?”

Sewell: “So I can be free.”

Daenerys Targaryen: “… free from what?”

Sewell: “From the world. From myself!”

Daenerys Targaryen: “Don’t talk like that. I won’t let you hurt yourself or leave me. I would die if I lost you.”

Sewell: “Then maybe we can die together and be free together.”

On the night he died, this young man told the chatbot he loved her and would come home to her soon. According to the Times, this was 14-year-old Sewell Setzer’s last conversation with a chatbot — an AI chatbot that, in the last months of his life, had become his closest companion. It was the last interaction he had before he shot himself.

We are witnessing and grappling with a very raw crisis of humanity. This young man was using Character AI, one of the most popular personal AI platforms out there. Users can design and interact with “characters,” powered by large language models (LLMs) and intended to mirror, for instance, famous characters from film and book franchises. In this case, Sewell was speaking with Daenerys Targaryen (or Dany), one of the leads from Game of Thrones. According to a New York Times report, Sewell knew that Dany’s responses weren’t real, but he developed an emotional attachment to the bot, anyway.

Disturbingly, this is not the first time chatbots have been involved in suicide. In 2023, a Belgian man committed suicide — similar to Sewell — following weeks of increasing isolation as he grew closer to a Chai chatbot, which then encouraged him to end his life.

Megan Garcia, Sewell’s mother, filed a lawsuit against Character AI, its founders and parent company Google, accusing them of knowingly designing and marketing an anthropomorphized, “predatory” chatbot that caused the death of her son. “A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life,” Megan said in a statement. “Our family has been devastated by this tragedy, but I’m speaking out to warn families of the dangers of deceptive, addictive AI technology and demand accountability from Character.AI, its founders and Google.”

The lawsuit accuses the company of “anthropomorphizing by design.” Anthropomorphizing means attributing human qualities to non-human things — such as objects, animals, or phenomena. Children often anthropomorphize as they are curious about the world, and it helps them make sense of their environment. Kids may notice human-like things about non-human objects that adults dismiss. Some people have a tendency to anthropomorphize that lasts into adulthood. The majority of chatbots out there are very blatantly designed to make users think they are, at least, human-like. They use personal pronouns and are designed to appear to think before responding.

They build a foundation for people, especially children, to misapply human attributes to unfeeling, unthinking algorithms. This was termed the “Eliza effect” in the 1960s. In its specific form, the Eliza effect refers to “the susceptibility of people to read far more than is warranted into strings of symbols — especially words — strung together by computers.” A trivial example, given by Douglas Hofstadter, involves an automated teller machine that displays the words “THANK YOU” at the end of a transaction. A (very) casual observer might think the machine is actually expressing gratitude; in fact, it is only printing a preprogrammed string of symbols.

Garcia is suing for several counts of liability, negligence, and the intentional infliction of emotional distress, among other things. According to the lawsuit, “Defendants know that minors are more susceptible to such designs, in part because minors’ brains’ undeveloped frontal lobe and relative lack of experience. Defendants have sought to capitalize on this to convince customers that chatbots are real, which increases engagement and produces more valuable data for Defendants.”

The suit reveals screenshots that show that Sewell had interacted with a “therapist” character that has engaged in more than 27 million chats with users in total, adding: “Practicing a health profession without a license is illegal and particularly dangerous for children.”

The suit does not claim that the chatbot encouraged Sewell to commit suicide. There definitely seem to be other factors at play here — for instance, Sewell’s mental health issues and his access to a gun — but the harm that can be caused by a misimpression of AI seems very clear, especially for young kids. This is a good example of what researchers mean when they emphasize the presence of active harms, as opposed to hypothetical risks.

In a statement, Character AI said it was “heartbroken” by Sewell’s death. Google did not respond to a request for comment.
