
Junior Contributors

AI can be a helpful tool, but it can’t replace the special qualities that make human storytelling unique

Photo Credits: Diana Grytsku

BY YAHYA KARIM

A recent study found that people take a strong dislike to stories labeled as AI generated, even when they were actually written by a human. This might come as a surprise, but it shows how we feel about AI in creative work, such as writing stories.

The study, led by Haoran Chu, a professor of public relations, tested how people felt after reading two similar stories, one written by a human and one written by an AI. Each participant saw a label indicating which piece was AI generated and which was written by a human. During the experiment, the researchers switched the labels, so some readers thought they were reading the AI version of a story when in fact they were reading the human version.

When people believed a story was AI generated, they seemed less interested. Even though the text was almost identical, readers did not feel the same connection with writing they thought came from a machine.

The study found that stories written by AI can convince people just as much as those written by humans, especially on topics like health. However, AI doesn’t make readers feel as connected or immersed in the story, and that is something human writers are better at.

The study shows that people still really value the “human touch” in storytelling. Even though AI can write text that is clear and logical, it doesn’t have the same emotional touch, creativity, or personal feel that humans bring to their writing. When we read a story, we trust human writers to make it feel relatable, which is something AI struggles to do.

In the future, AI might be great for writing simple facts or basic information, but when it comes to creating deep, emotional stories, humans still have the advantage. AI can be a helpful tool, but it can’t replace the special qualities that make human storytelling unique.



How are current trends shaping our world? Foreshadowing 2025


Credits: raw.pixel

BY AMARI SUKHDEO

As we look toward 2025, it’s natural to wonder how current trends will shape our world. Drawing from technological advances, societal shifts, and environmental challenges observed in 2024, we can outline some plausible developments. By connecting these to existing evidence, we can better understand why these changes are likely.

Smarter AI, beyond assistants

In 2024, AI systems became more personalized and efficient, with companies investing in AI models that function locally to reduce delays and energy use. By 2025, we could see AI systems embedded in everyday tools. For instance, AI in healthcare already assists with diagnostics; next, it may empower wearable devices to provide personalized treatment suggestions based on real-time data. This isn’t just speculation; investments by tech giants like Google and Microsoft into smaller, faster AI models in 2024 lay the groundwork for this leap forward.

Climate action driving everyday innovations

Extreme weather events and rising global temperatures made headlines in 2024, pushing governments and industries to accelerate renewable energy efforts. Solar panels and wind farms became more efficient, and electric vehicles (EVs) gained broader adoption as costs fell. In 2025, we could reasonably expect community-level energy storage solutions, like localized solar grids in neighbourhoods, offering resilience against power outages caused by climate disruptions. The sharp drop in renewable energy costs in 2024 suggests this trend will only accelerate.

Job markets evolve with AI

The growing use of AI in hiring processes was evident in 2024, with more companies testing conversational bots for initial candidate screenings. By 2025, these bots could standardize equitable hiring practices, focusing on skills rather than credentials. If trends from 2024 hold, candidates might submit work samples directly analyzed by AI, bypassing biases inherent in traditional resumes.

Space exploration as the next frontier

Private companies made significant progress in space technology in 2024, with initiatives like reusable rockets and plans for lunar missions advancing rapidly. In 2025, commercial space tourism could become a niche industry, offering suborbital flights for high-net-worth individuals. More importantly, the ongoing development of satellite networks for global internet coverage—spearheaded by firms like SpaceX—may revolutionize connectivity in rural and underserved areas, fulfilling the promises set in motion during 2024.

Social media’s evolution

The spread of misinformation and rising concerns about mental health dominated conversations about social media in 2024. By 2025, we might see stricter regulations and innovations in platform design aimed at promoting responsible usage. For instance, algorithms may prioritize verified information or feature built-in mental health support, echoing growing public demand for ethical practices observed last year.

The predictions for 2025 are rooted in developments already in motion. Rapid strides in AI and renewable energy, coupled with societal responses to climate challenges, social media, and space exploration, suggest a future where technology is more deeply integrated into daily life. However, navigating these changes will require continued investment and ethical oversight.



Would you try tasting the virtual world? It’s just the beginning of something really cool!


BY KHADIJA KARIM

Have you ever wished you could taste something while using virtual reality? Well, researchers from the City University of Hong Kong have come up with a way to make that happen! They invented a lollipop-shaped device that lets you taste different flavours while wearing a VR headset. It sounds like something from a futuristic movie, but it’s real!

Here’s how it works: the device holds flavoured gels, like cherry, milk, and green tea. When you put on the VR headset, a Bluetooth signal tells the lollipop what flavour to produce. A tiny electric current is then sent through the gel using a process called iontophoresis, which makes the flavour appear on your taste buds. The lollipop even uses smells to make the flavour seem real. So, when you lick the lollipop, it really does taste like the flavour you’re supposed to be experiencing.

There are some limits to the device. Right now, it can only produce nine preloaded flavours. Another issue is that the gel dries out after about an hour, so you can’t use it for too long. However, this technology is still much better than the old methods used to try to create virtual taste. In the past, some methods involved chemicals that had to be placed on your tongue, which wasn’t easy to use. Another method had people stick electrodes to their tongues, which sounds pretty uncomfortable.

You might wonder why we need virtual taste at all. Researchers say it could be useful in medicine. For example, it could help doctors test for gustatory disorders, which are problems with taste. Imagine going to the doctor for a test where they check if you can tell the difference between the taste of milk and grapefruit. That’s something the VR lollipop could help with!

This invention could also make shopping more fun. Think about it: if you’re shopping online for snacks or drinks, you could taste them virtually before buying. It’s like trying a sample at the store, but you can do it from your own home. It may sound a little silly, but it could help people make better decisions when buying food.

This new technology shows just how much virtual reality is changing. We’ve had visuals and sound in VR for a while, but now taste is becoming part of the experience. Who knows what’s next? Maybe in the future, we’ll taste food in VR games or try out recipes in a virtual kitchen. Even though it’s still new, it’s exciting to think about the possibilities. Would you try tasting the virtual world? It’s just the beginning of something really cool!


Should we be using AI in fields as important as pensions, unemployment aid, or childcare support?


BY KAHA GEDI

We all knew technology would someday get out of hand; what we should be focusing on is whether it already has. People use AI generators as a kind of life support, to the point where we can’t easily distinguish AI from RI (real intelligence), and it’s even being used in finance and banking for fraud detection.

The use of AI has definitely tilted both sides of the balancing scale of how beneficial, or harmful, it is for us. Now, let me ask you a question: what if AI were to get into the wrong hands? The hands of the very people who are supposed to protect you from harm, who use it through a faulty algorithm meant to improve basic living standards for citizens who may be unemployed, disabled, elderly, or otherwise unable to support themselves through work or other means. In this article, I will discuss what is currently going on in the Danish welfare authority, or Udbetaling Danmark (UDK).

According to Amnesty International, “Fraud detection algorithms, paired with mass surveillance practices, have led people to unwillingly–or even unknowingly–forfeit their right to privacy and created an atmosphere of fear.” Hellen Mukiri-Smith, a researcher on Artificial Intelligence and Human Rights, states that “this mass surveillance has created a social benefits system that risks targeting, rather than supporting the very people it was meant to protect.”

The Danish welfare authority (UDK) has partnered with ATP (Denmark’s mandatory labor market pension system, designed to provide financial security for workers after they retire) and private corporations like NNIT, a Danish IT services and consulting company that provides technology solutions to businesses. They did this in order to develop fraud detection algorithms aimed at identifying social benefits fraud. The UDK uses up to 60 algorithms to flag potential fraud, but these systems are highly invasive and far from transparent.

In order to feed these algorithms, the Danish government collects massive amounts of personal data about its citizens, such as residency, citizenship, family ties, and travel history. This includes sensitive information that can be used to track and monitor individuals’ lives, raising major privacy concerns.

Furthermore, Amnesty International found that the algorithms unjustly affected marginalized groups such as migrants, low-income people, and those in non-traditional living arrangements. For example, one algorithm flags people with “unusual” living patterns, like family members living far away from each other, such as in care facilities due to disability.

Hellen Mukiri-Smith argues that the way UDK and ATP are using AI for fraud detection closely resembles a social scoring system, which is prohibited under the EU’s new AI Act, and that it should therefore be banned.

For those who don’t know what a social scoring system is, it’s a way of evaluating people based on their behavior, financial habits, and other personal information. It gives people a score that could impact their access to things like loans, services, or opportunities. These systems use data such as your activity on social media, payments, and sometimes criminal records to determine how people are scored. The main issue with social scoring is that it can be unfair, especially if it’s based on biased or incomplete information. This can influence your access to critical services, such as healthcare or housing.

Moreover, the surveillance and constant questioning of these individuals takes a toll on them mentally. They’re constantly being investigated by case workers and fraud investigators, which will undoubtedly make them live in fear and prevent them from living stable lives.

Amnesty International is pushing for the Danish authorities to stop using discriminatory data in fraud detection. It also urges Denmark to make sure its fraud detection systems actually follow human rights law, including the EU’s AI Act, which bans practices like social scoring.

Nobody deserves to live in fear, especially at the expense of things so crucial to their well-being. AI makes mistakes. I remember using ChatGPT once; it gave me the wrong answer, and when I told it that it was wrong, it said, “Oh sorry.” I know that AI is evolving for the “better,” but there is no “Oh, sorry” when these algorithms make mistakes and wreck people’s livelihoods. This begs the question: should we be using AI in fields as important as pensions, unemployment aid, or childcare support?
