Don’t fall victim to persuasive AI-generated misinformation

In my digital studies class, I asked students to pose a query to ChatGPT and discuss the results. To my surprise, some asked ChatGPT about my biography.

ChatGPT said I received my PhD from two different universities, and in two different subject areas, only one of which represented the focus of my doctoral work.

This made for an entertaining class, but also helped illuminate a major risk of generative AI tools — making us more likely to fall victim to persuasive misinformation.

To overcome this threat, educators need to teach skills to function in a world with AI-generated misinformation.

Worsening the misinformation problem

Generative AI stands to make the already difficult task of separating evidence-based information from misinformation and disinformation even harder.

Text-based tools like ChatGPT can create convincing-sounding academic articles on a subject, complete with citations, that can fool people without a background in the topic. Video-, audio- and image-based AI can convincingly spoof people’s faces, voices and even mannerisms, creating apparent evidence of behaviour or conversations that never took place.

As AI-generated text, images and videos are combined into bogus news stories, we should expect more attempts by conspiracy theorists and misinformation opportunists to use these tools to fool others for their own gain.

While it was possible to create fake videos, news stories or academic articles before generative AI was widely accessible, the process took time and resources. Now, convincing disinformation can be created much more quickly, opening new opportunities to destabilize democracies around the world.

New critical thinking applications needed

To date, a focus of teaching critical media literacy both at the public school and post-secondary levels has been asking students to engage deeply with a text and get to know it well so they can summarize it, ask questions about it and critique it.

This approach will likely serve us less well in an age when AI can so easily spoof the very cues we rely on to assess quality.

While there are no easy answers to the problem of misinformation, I suggest that teaching these three key skills will better equip all of us to be more resilient in the face of these threats:

1. Lateral reading of texts

Rather than reading a single article, blog or website deeply on first encounter, we need to equip students with a new set of filtering skills, often called lateral reading.

In lateral reading, we ask students to search for cues before reading deeply. Questions to pose include: Who authored the article? How do you know? What are their credentials and are those credentials related to the topic being discussed? What claims are they making and are those claims well supported in academic literature?

Doing this task well also means preparing students to understand different types of research.

2. Research literacy

In much popular imagination and everyday practice, “research” has come to mean an internet search. But searching the internet is not the same as the systematic process of gathering and evaluating evidence.

We need to teach students how to distinguish well-founded evidence-based claims from conspiracy theories and misinformation.

Students at all levels need to learn how to evaluate the quality of academic and non-academic sources. This means teaching students about research quality, journal quality and different kinds of expertise. For example, a doctor could speak on a popular podcast about vaccines, but if that doctor is not a vaccine specialist, or if the total body of evidence doesn’t support their claims, it doesn’t matter how convincing those claims are.

Thinking about research quality also means becoming familiar with concepts like sample size, methodology, peer review and falsifiability.
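
To make the sample-size point concrete, here is a minimal Python sketch using the standard margin-of-error formula for a proportion. The numbers are purely illustrative and not drawn from any real study:

```python
# Toy illustration of why sample size matters: the margin of error for an
# observed proportion shrinks with the square root of the sample size.
# All numbers here are invented for illustration.
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion p observed in a sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

# Compare an "80% improvement" claim based on 10 patients vs. 1,000 patients.
for n in (10, 1000):
    moe = margin_of_error(0.8, n)
    print(f"n={n}: 80% +/- {moe * 100:.0f} percentage points")
# n=10:   80% +/- 25 percentage points (nearly uninformative)
# n=1000: 80% +/- 2 percentage points
```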

3. Technological literacy

Many people don’t know that AI isn’t actually intelligent. Instead, it is built from language- and image-processing algorithms that recognize statistical patterns in their training data and parrot them back to us, predicting what is likely rather than what is true.
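
A toy model can make this idea concrete. The sketch below, in Python, is a deliberately tiny stand-in and assumes nothing about how ChatGPT is actually built; it simply shows the principle of echoing statistically likely word sequences learned from training text:

```python
# Toy "parrot": a bigram model that reproduces statistical patterns from its
# training text. Real systems are vastly larger and more sophisticated, but
# the core mechanism is similar: predict a likely next word, with no notion
# of whether the result is true.
import random
from collections import defaultdict

training_text = (
    "the professor received a phd the professor wrote a book "
    "the student wrote a thesis the student received a degree"
)

# Record which words follow which in the training text.
follows = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

def generate(start: str, length: int = 5) -> str:
    """Chain statistically likely next words into fluent-sounding output."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))  # likely, not necessarily true
    return " ".join(out)

print(generate("the"))
# e.g. "the student received a phd" -- fluent, pattern-based and possibly
# false, much like a confident AI hallucination.
```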

Similarly, many people don’t realize that the content we see on social media is dictated by algorithms that prioritize engagement, because engagement generates advertising revenue.
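
As a rough sketch of that incentive, consider the hypothetical ranking function below; the signals and weights are invented for illustration and do not reflect any platform’s real formula:

```python
# Simplified sketch of engagement-based feed ranking. Field names and weights
# are hypothetical; real ranking systems are far more complex and proprietary.
# The point is the incentive: posts are ordered by predicted engagement, and
# nothing in the score rewards accuracy.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float
    predicted_shares: float
    predicted_watch_time: float

def engagement_score(post: Post) -> float:
    # Hypothetical weights; shares (often driven by outrage) count heavily.
    return (1.0 * post.predicted_clicks
            + 3.0 * post.predicted_shares
            + 0.5 * post.predicted_watch_time)

feed = [
    Post("Calm, accurate explainer", 2.0, 0.5, 4.0),
    Post("Outrageous false claim", 5.0, 4.0, 2.0),
]

# The false claim ranks first because it engages more, not because it is true.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(round(engagement_score(post), 1), post.text)
```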

We rarely stop to think about why we see the content we’re shown through these technologies, who creates them, or how the biases of programmers shape what we see.

If we can all develop a stronger critical orientation to these technologies, following the money and asking who benefits when we’re served specific content, then we will become more resistant to the misinformation that is spread using these tools.

Through these three skills of lateral reading, research literacy and technological literacy, we will become more resistant to misinformation of all kinds, and less susceptible to the new threat of AI-based misinformation.

Source: The Conversation
