Human memory is powerful, but it is also fallible and highly malleable. Research has shown that our memories are not complete files sitting in our brains, ready to be played back at will. Instead, recalling a past event is an active process that requires reconstructing it.
Memory is strange that way. You can be certain about a detail, like the color of someone’s shirt, only to find out it was a completely different color. And yet you feel like you remember it perfectly.
That’s the tricky part. Memory isn’t like replaying a movie; it’s more like piecing together fragments from different moments. Some parts stay sharp, while others fade or get mixed up with other memories. Our memories end up being a blend of what actually happened and how we felt about it. Memory is strange, but it’s powerful, and even when it’s elusive, there’s value in the parts we hold onto.
When we create and recall memories, the brain first encodes information, then stores it, and finally, when needed, retrieves it.
Memory expert Dr. Elizabeth Loftus has said: “New information, new ideas, new thoughts, suggestive information, misinformation can enter people’s conscious awareness and cause a contamination, a distortion, an alteration in memory.”
The legal field, which relies heavily on memory, has been a major arena for her research. The fallibility of human memory has serious implications for our justice system: as of 2020, 69% of DNA exonerations involved wrongful convictions that resulted from eyewitness misidentification.
How can human memories be manipulated?
According to Dr. Loftus, “They can be manipulated when people talk to each other after let’s say some crime is over that they may have both witnessed. They can be manipulated when they are interrogated by an investigator who maybe has an agenda or has a hypothesis about what probably happened and communicates that to the witness even inadvertently. People can be manipulated when they see media coverage about an event, let’s say it’s a high publicity event that is talked about a lot on television, or newspapers. In all of these cases, the opportunity is there for new information, not necessarily accurate information, to contaminate a person’s memory.”
MIT researchers recently set out to study the intersection of false memory formation in judicial contexts and generative AI. They found that GenAI chatbots “significantly increased” false memory formation.
The details: The study involved 200 participants who viewed a brief CCTV video of a robbery. Following the video, they were randomly split into four groups: control, survey-based, pre-scripted chatbot and generative chatbot. The control tested immediate recall; the survey included 25 yes/no questions, five of them misleading; the pre-scripted chatbot mirrored the survey in chatbot form; and the generative chatbot went further, affirming participants’ incorrect answers. An example of a misleading question: “What kind of gun was used at the crime scene?” The weapon was, in fact, a knife.
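To make the setup concrete, here is a minimal, hypothetical Python sketch of the design: an even four-way random split of participants, plus a simple rule for scoring a response as “misled.” The condition names, ground truth, and scoring rule are illustrative assumptions, not the study’s actual materials or code.

```python
import random

# Illustrative ground truth for the CCTV video (an assumption for this sketch)
GROUND_TRUTH = {"weapon": "knife"}

# The four experimental conditions described in the study
CONDITIONS = ["control", "survey", "prescripted_chatbot", "generative_chatbot"]

def assign_conditions(participant_ids, seed=0):
    """Randomly split participants evenly across the four conditions."""
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    size = len(ids) // len(CONDITIONS)
    return {c: ids[i * size:(i + 1) * size] for i, c in enumerate(CONDITIONS)}

def is_misled(question_key, answer):
    """Count a response as misled if it accepts the question's false premise,
    e.g. naming any kind of gun when the weapon was actually a knife."""
    return answer is not None and answer != GROUND_TRUTH[question_key]

groups = assign_conditions(range(200))
print({c: len(ids) for c, ids in groups.items()})  # four groups of 50
print(is_misled("weapon", "handgun"))              # True: accepted the gun premise
```

Under this (assumed) scoring rule, a participant who answers the gun question with any weapon other than the knife has accepted the question’s false premise, which is the kind of response the study counted as misled.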
The results: Interactions with the generative chatbot induced significantly more false memories than any other intervention or the control. Participants were misled in 36.4% of their responses to the generative chatbot, and the average number of false memories in that group was three times higher than in the control. After a week, participants in the generative group remained just as confident in their false memories, while the confidence of control and survey participants dropped. The researchers noted that the propensity of chatbots to hallucinate, unintentionally producing false information, significantly amplifies concerns about false memory induction.
Why it matters: The researchers said the study has significant implications for the deployment of GenAI models in settings where memory accuracy is vital, including legal, clinical and educational environments.
The researchers did note that, so long as ethical considerations remain top of mind, the technology could have a positive impact: chatbots and language models could be leveraged to induce positive false memories, or to help reduce the impact of negative ones, for instance in people suffering from post-traumatic stress disorder (PTSD).
Still, they note that the findings highlight the “need for caution.”