How to Help Students Avoid Getting Duped Online — and by AI Chatbots

Many studies show that students struggle to sort fact from misinformation online and on social media. But it’s not because students aren’t good at critical thinking, argues Mike Caulfield, a research scientist at the University of Washington’s Center for an Informed Public.

This article is from EdSurge.

In an interview with EdSurge, Mike Caulfield discusses the concept of “critical ignoring” as an essential skill for navigating information online. He explains that the internet is relatively unfiltered compared to traditional media, so users must decide what is worth their attention. He criticizes conventional models that encourage deep critical attention to all information, arguing that this approach can be disastrous on the internet. For example, engaging deeply with a Holocaust denier’s arguments would be inappropriate and unproductive. Instead, he suggests users should quickly evaluate a source’s credibility and decide whether it’s worth their time. He emphasizes that in an age of information abundance, attention is scarce, and learning to choose wisely where to invest attention is a crucial skill.

The SIFT method is a technique for evaluating information, primarily online. Each letter in “SIFT” stands for a step in the process:

  1. S – Stop: When you first encounter a source of information and start to read it, stop. Ask yourself whether you know and trust the author, publisher, publication, or website. If you don’t, use the other fact-checking moves that follow.
  2. I – Investigate the Source: You don’t have to do a three-hour investigation into a source before you engage with it. But knowing the expertise and agenda of the person who created the source is crucial to your interpretation of the information provided.
  3. F – Find Better Coverage: If the source you find is low-quality, or you can’t determine whether it is reliable, perhaps you don’t care about the source itself; you care about the claim the source is making. You want to know whether that claim is accurate or false.
  4. T – Trace Claims, Quotes, and Media to the Original Context: You want to know whether a claim represents a consensus viewpoint or is the subject of much disagreement.

The method was developed by Caulfield while he was a digital literacy expert at Washington State University. It’s designed to help users quickly evaluate whether a source is worth their attention.

Here are some examples of applying the SIFT method.

  1. Evaluating a news article: When you come across an article online, you can apply the SIFT method. First, stop and ask yourself if you know the website or source of the information. If not, investigate the source to understand its reputation and credibility. If the source is unreliable, find better coverage from a trusted source. Finally, trace the claims or quotes to their original source.
  2. Analyzing a social media post: If you see a post on social media making a claim, stop and consider whether you know and trust the author or publisher. Investigate the source to understand their expertise and agenda. If the source is not trustworthy, find better coverage of the claim from a reputable source. Lastly, trace the claims to their original context.

Remember, the SIFT method is not just about determining if something is true or false. It’s about understanding what you’re reading before you read it, knowing the expertise and agenda of the source, and deciding what to invest your attention in.


So how do ChatGPT and other AI chatbots enter the equation? A large language model (LLM) like ChatGPT generates responses to text inputs based on patterns it has learned, much as your phone’s autocomplete does. It can produce compelling answers and summaries by predicting what people might say in response to certain inputs. However, unlike humans, it has no communicative goals and no deep understanding of what it is saying. Its biggest flaw is its inability to evaluate information the way a human does.

So it presents a pretty compelling answer. It can be good at summary, where there’s a lot of text to put together, a lot of text for it to pull from. But it has some flaws. And the biggest flaw is that it doesn’t really have communicative goals. It doesn’t really know what it’s saying. It’s not able to evaluate things in the way a human is.

And there’s a couple things wrong with that. Without understanding the point of the thing that you’re doing, it can go astray. And that is not as big a problem for experts in a field, because if you’re an expert in something, you go to ChatGPT and you type something in, you can see pretty immediately, ‘Oh, actually this is a helpful summary.’ Or, ‘Oh, no, this has things wrong.’ But it’s not great for novices.

And that’s the problem. I think people have got this upside down. People think, ‘Oh, ChatGPT is going to help a novice be like an expert.’ But in reality, ChatGPT and LLMs are good for experts because they can see when this thing is clearly spouting out bull-. ChatGPT makes it possible for anyone to look like they know what they’re talking about. And it gives a sort of surface that looks very impressive. And so it makes it all the more important that when you see something online that you not say, ‘Oh, is this a scholarly tone? Does this have footnotes?’ Those things are meaningless. Now in the world of LLMs, anybody can write something that looks authoritative and has all the features of authoritative texts without knowing what they’re talking about at all. And so you’ve got to go elsewhere. You’ve got to get off the page [to find out more about the source]. And I think it just makes these skills all the more pressing.

EdSurge. By Jeffrey R. Young


I would love to hear from you.